Test Report: QEMU_macOS 18358

2f1fe73fe0a81db98fd5a1fcfb9006c4b42c71ed : 2024-03-11 : 33520

Failed tests (98/281)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.72
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.99
39 TestAddons/parallel/Ingress 33.42
54 TestCertOptions 10.25
55 TestCertExpiration 197.05
56 TestDockerFlags 10.17
57 TestForceSystemdFlag 10.09
58 TestForceSystemdEnv 10.25
103 TestFunctional/parallel/ServiceCmdConnect 32.26
175 TestMutliControlPlane/serial/StopSecondaryNode 214.16
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.28
177 TestMutliControlPlane/serial/RestartSecondaryNode 209.15
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 234.39
180 TestMutliControlPlane/serial/DeleteSecondaryNode 0.11
181 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 2
182 TestMutliControlPlane/serial/StopCluster 251.16
183 TestMutliControlPlane/serial/RestartCluster 5.26
184 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.11
185 TestMutliControlPlane/serial/AddSecondaryNode 0.08
189 TestImageBuild/serial/Setup 10.27
192 TestJSONOutput/start/Command 9.79
198 TestJSONOutput/pause/Command 0.07
204 TestJSONOutput/unpause/Command 0.04
221 TestMinikubeProfile 10.27
224 TestMountStart/serial/StartWithMountFirst 10.66
227 TestMultiNode/serial/FreshStart2Nodes 10.02
228 TestMultiNode/serial/DeployApp2Nodes 117.68
229 TestMultiNode/serial/PingHostFrom2Pods 0.09
230 TestMultiNode/serial/AddNode 0.08
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.1
233 TestMultiNode/serial/CopyFile 0.06
234 TestMultiNode/serial/StopNode 0.15
235 TestMultiNode/serial/StartAfterStop 51.78
236 TestMultiNode/serial/RestartKeepsNodes 8.61
237 TestMultiNode/serial/DeleteNode 0.11
238 TestMultiNode/serial/StopMultiNode 4.01
239 TestMultiNode/serial/RestartMultiNode 5.26
240 TestMultiNode/serial/ValidateNameConflict 20.24
244 TestPreload 9.89
246 TestScheduledStopUnix 9.98
247 TestSkaffold 16.57
250 TestRunningBinaryUpgrade 669.49
252 TestKubernetesUpgrade 17.36
266 TestStoppedBinaryUpgrade/Upgrade 617.45
276 TestPause/serial/Start 10.12
279 TestNoKubernetes/serial/StartWithK8s 10.11
280 TestNoKubernetes/serial/StartWithStopK8s 5.93
281 TestNoKubernetes/serial/Start 5.93
282 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.45
286 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.97
287 TestNoKubernetes/serial/StartNoArgs 5.87
289 TestNetworkPlugins/group/auto/Start 9.84
290 TestNetworkPlugins/group/calico/Start 9.81
291 TestNetworkPlugins/group/custom-flannel/Start 9.8
292 TestNetworkPlugins/group/false/Start 9.86
293 TestNetworkPlugins/group/kindnet/Start 9.8
294 TestNetworkPlugins/group/flannel/Start 9.9
295 TestNetworkPlugins/group/enable-default-cni/Start 9.91
296 TestNetworkPlugins/group/bridge/Start 9.75
297 TestNetworkPlugins/group/kubenet/Start 9.82
299 TestStartStop/group/old-k8s-version/serial/FirstStart 9.77
300 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
308 TestStartStop/group/old-k8s-version/serial/Pause 0.11
310 TestStartStop/group/no-preload/serial/FirstStart 9.91
311 TestStartStop/group/no-preload/serial/DeployApp 0.09
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/no-preload/serial/SecondStart 5.26
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/no-preload/serial/Pause 0.11
321 TestStartStop/group/embed-certs/serial/FirstStart 9.89
322 TestStartStop/group/embed-certs/serial/DeployApp 0.09
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
326 TestStartStop/group/embed-certs/serial/SecondStart 5.22
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/embed-certs/serial/Pause 0.11
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.87
334 TestStartStop/group/newest-cni/serial/FirstStart 9.92
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.49
344 TestStartStop/group/newest-cni/serial/SecondStart 5.26
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
352 TestStartStop/group/newest-cni/serial/Pause 0.11
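
Most of the failures below reduce to two recurring root causes visible in the logs: the kubectl v1.20.0 download for darwin/arm64 returning 404, and the socket_vmnet daemon on the agent refusing connections. Any single test can be re-run locally against the built binary; the invocation below is a sketch only, assuming minikube's test/integration package layout, with flags mirroring those seen in the logs:

	# Sketch: re-run one failed test from a minikube checkout (assumes test/integration layout).
	go test ./test/integration -run 'TestOffline' -timeout 30m -v --args --driver=qemu2
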
TestDownloadOnly/v1.20.0/json-events (39.72s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-006000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-006000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.721281042s)

-- stdout --
	{"specversion":"1.0","id":"894cf63e-9a86-4095-a6fb-5afb29a1dd8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-006000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"122630f1-c3ec-472f-9be4-91d39e19cf5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18358"}}
	{"specversion":"1.0","id":"f5cae0c7-6f0a-405e-9be1-424b48765f53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig"}}
	{"specversion":"1.0","id":"a9d80a86-422d-4d4f-8106-f1d958b97715","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5a245739-1a6e-4fda-a19d-732a0daff3b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4caa573c-773d-491d-ad75-f4dec8751362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube"}}
	{"specversion":"1.0","id":"f8818b49-22c8-4216-a7a2-b5c6b1d538fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c4da0afa-0c6a-40cc-9f78-56969e17ccc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd51a537-8848-4ad3-9e79-d4f3076ea6f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"61c72327-da46-4157-9165-5c35df569461","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"66b73030-50dc-4476-ae89-388920a5b4cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-006000\" primary control-plane node in \"download-only-006000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"add98671-b231-4e39-87d0-dd38e92b0179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8f092bd-575f-4d4a-9ecd-fe252f5db064","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0] Decompressors:map[bz2:0x14000a00290 gz:0x14000a00298 tar:0x14000a00240 tar.bz2:0x14000a00250 tar.gz:0x14000a00260 tar.xz:0x14000a00270 tar.zst:0x14000a00280 tbz2:0x14000a00250 tgz:0x14000a00260 txz:0x14000a00270 tzst:0x14000a00280 xz:0x14000a002a0 zip:0x14000a002b0 zst:0x14000a002a8] Getters:map[file:0x14000640d10 http:0x14000cb8690 https:0x14000cb86e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"243ebbc6-85a8-4488-99a5-78f29faf7311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0311 13:09:36.858016    1654 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:09:36.858153    1654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:09:36.858156    1654 out.go:304] Setting ErrFile to fd 2...
	I0311 13:09:36.858159    1654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:09:36.858280    1654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	W0311 13:09:36.858389    1654 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18358-1220/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18358-1220/.minikube/config/config.json: no such file or directory
	I0311 13:09:36.859631    1654 out.go:298] Setting JSON to true
	I0311 13:09:36.876725    1654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":547,"bootTime":1710187229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:09:36.876795    1654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:09:36.882542    1654 out.go:97] [download-only-006000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:09:36.885561    1654 out.go:169] MINIKUBE_LOCATION=18358
	I0311 13:09:36.882698    1654 notify.go:220] Checking for updates...
	W0311 13:09:36.882707    1654 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 13:09:36.892504    1654 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:09:36.895545    1654 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:09:36.898586    1654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:09:36.901562    1654 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	W0311 13:09:36.907605    1654 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 13:09:36.907850    1654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:09:36.912509    1654 out.go:97] Using the qemu2 driver based on user configuration
	I0311 13:09:36.912530    1654 start.go:297] selected driver: qemu2
	I0311 13:09:36.912545    1654 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:09:36.912606    1654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:09:36.916529    1654 out.go:169] Automatically selected the socket_vmnet network
	I0311 13:09:36.922386    1654 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 13:09:36.922495    1654 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 13:09:36.922542    1654 cni.go:84] Creating CNI manager for ""
	I0311 13:09:36.922560    1654 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 13:09:36.922614    1654 start.go:340] cluster config:
	{Name:download-only-006000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:09:36.928395    1654 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:09:36.932510    1654 out.go:97] Downloading VM boot image ...
	I0311 13:09:36.932522    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0311 13:09:55.125133    1654 out.go:97] Starting "download-only-006000" primary control-plane node in "download-only-006000" cluster
	I0311 13:09:55.125161    1654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:09:55.430125    1654 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 13:09:55.430234    1654 cache.go:56] Caching tarball of preloaded images
	I0311 13:09:55.431020    1654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:09:55.436517    1654 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 13:09:55.436544    1654 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:09:56.129220    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 13:10:15.088387    1654 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:15.088551    1654 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:15.790088    1654 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 13:10:15.790293    1654 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/download-only-006000/config.json ...
	I0311 13:10:15.790309    1654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/download-only-006000/config.json: {Name:mk7542d81dad174abfa1be338e75785717485840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:10:15.790537    1654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:10:15.790714    1654 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0311 13:10:16.504466    1654 out.go:169] 
	W0311 13:10:16.509367    1654 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0] Decompressors:map[bz2:0x14000a00290 gz:0x14000a00298 tar:0x14000a00240 tar.bz2:0x14000a00250 tar.gz:0x14000a00260 tar.xz:0x14000a00270 tar.zst:0x14000a00280 tbz2:0x14000a00250 tgz:0x14000a00260 txz:0x14000a00270 tzst:0x14000a00280 xz:0x14000a002a0 zip:0x14000a002b0 zst:0x14000a002a8] Getters:map[file:0x14000640d10 http:0x14000cb8690 https:0x14000cb86e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0311 13:10:16.509389    1654 out_reason.go:110] 
	W0311 13:10:16.517366    1654 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:10:16.521278    1654 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-006000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.72s)
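
The exit status 40 above comes from the kubectl cache step: the v1.20.0 darwin/arm64 checksum URL returns 404, consistent with the v1.20 release series predating published darwin/arm64 binaries. The 404 can be reproduced without minikube (a sketch; the URL is copied from the log above):

	# Expect a 404 status line: the checksum file the getter requests does not exist.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1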

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.99s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-485000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-485000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.822165209s)

-- stdout --
	* [offline-docker-485000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-485000" primary control-plane node in "offline-docker-485000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-485000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 13:51:25.607427    3938 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:51:25.607572    3938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:51:25.607576    3938 out.go:304] Setting ErrFile to fd 2...
	I0311 13:51:25.607579    3938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:51:25.607707    3938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:51:25.608926    3938 out.go:298] Setting JSON to false
	I0311 13:51:25.626281    3938 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3056,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:51:25.626355    3938 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:51:25.632238    3938 out.go:177] * [offline-docker-485000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:51:25.640185    3938 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:51:25.640216    3938 notify.go:220] Checking for updates...
	I0311 13:51:25.648148    3938 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:51:25.651102    3938 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:51:25.654155    3938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:51:25.657170    3938 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:51:25.660093    3938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:51:25.663481    3938 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:51:25.663551    3938 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:51:25.667156    3938 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 13:51:25.674119    3938 start.go:297] selected driver: qemu2
	I0311 13:51:25.674129    3938 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:51:25.674139    3938 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:51:25.676201    3938 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:51:25.679165    3938 out.go:177] * Automatically selected the socket_vmnet network
	I0311 13:51:25.682211    3938 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:51:25.682228    3938 cni.go:84] Creating CNI manager for ""
	I0311 13:51:25.682235    3938 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:51:25.682242    3938 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 13:51:25.682275    3938 start.go:340] cluster config:
	{Name:offline-docker-485000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-485000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:51:25.686597    3938 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:51:25.694155    3938 out.go:177] * Starting "offline-docker-485000" primary control-plane node in "offline-docker-485000" cluster
	I0311 13:51:25.698191    3938 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:51:25.698220    3938 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:51:25.698234    3938 cache.go:56] Caching tarball of preloaded images
	I0311 13:51:25.698306    3938 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:51:25.698312    3938 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:51:25.698372    3938 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/offline-docker-485000/config.json ...
	I0311 13:51:25.698383    3938 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/offline-docker-485000/config.json: {Name:mk706d1cd5f5abf835c9fd4d6d9be125f4e79491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:51:25.698581    3938 start.go:360] acquireMachinesLock for offline-docker-485000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:51:25.698610    3938 start.go:364] duration metric: took 22.417µs to acquireMachinesLock for "offline-docker-485000"
	I0311 13:51:25.698623    3938 start.go:93] Provisioning new machine with config: &{Name:offline-docker-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-485000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:51:25.698651    3938 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:51:25.707145    3938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 13:51:25.722504    3938 start.go:159] libmachine.API.Create for "offline-docker-485000" (driver="qemu2")
	I0311 13:51:25.722530    3938 client.go:168] LocalClient.Create starting
	I0311 13:51:25.722619    3938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:51:25.722661    3938 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:25.722672    3938 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:25.722718    3938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:51:25.722739    3938 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:25.722751    3938 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:25.723137    3938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:51:25.860954    3938 main.go:141] libmachine: Creating SSH key...
	I0311 13:51:25.982555    3938 main.go:141] libmachine: Creating Disk image...
	I0311 13:51:25.982568    3938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:51:25.982830    3938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2
	I0311 13:51:26.002342    3938 main.go:141] libmachine: STDOUT: 
	I0311 13:51:26.002369    3938 main.go:141] libmachine: STDERR: 
	I0311 13:51:26.002421    3938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2 +20000M
	I0311 13:51:26.013771    3938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:51:26.013796    3938 main.go:141] libmachine: STDERR: 
	I0311 13:51:26.013824    3938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2
	I0311 13:51:26.013829    3938 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:51:26.013858    3938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:f9:ac:2d:65:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2
	I0311 13:51:26.015699    3938 main.go:141] libmachine: STDOUT: 
	I0311 13:51:26.015717    3938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:51:26.015741    3938 client.go:171] duration metric: took 293.211208ms to LocalClient.Create
	I0311 13:51:28.015950    3938 start.go:128] duration metric: took 2.3173665s to createHost
	I0311 13:51:28.015983    3938 start.go:83] releasing machines lock for "offline-docker-485000", held for 2.317436375s
	W0311 13:51:28.015997    3938 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:28.024487    3938 out.go:177] * Deleting "offline-docker-485000" in qemu2 ...
	W0311 13:51:28.033212    3938 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:28.033223    3938 start.go:728] Will try again in 5 seconds ...
	I0311 13:51:33.035236    3938 start.go:360] acquireMachinesLock for offline-docker-485000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:51:33.035677    3938 start.go:364] duration metric: took 344.084µs to acquireMachinesLock for "offline-docker-485000"
	I0311 13:51:33.035937    3938 start.go:93] Provisioning new machine with config: &{Name:offline-docker-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-485000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:51:33.036215    3938 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:51:33.041918    3938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 13:51:33.087559    3938 start.go:159] libmachine.API.Create for "offline-docker-485000" (driver="qemu2")
	I0311 13:51:33.087617    3938 client.go:168] LocalClient.Create starting
	I0311 13:51:33.087729    3938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:51:33.087792    3938 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:33.087807    3938 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:33.087870    3938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:51:33.087911    3938 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:33.087926    3938 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:33.088448    3938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:51:33.242718    3938 main.go:141] libmachine: Creating SSH key...
	I0311 13:51:33.324971    3938 main.go:141] libmachine: Creating Disk image...
	I0311 13:51:33.324977    3938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:51:33.325153    3938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2
	I0311 13:51:33.337196    3938 main.go:141] libmachine: STDOUT: 
	I0311 13:51:33.337216    3938 main.go:141] libmachine: STDERR: 
	I0311 13:51:33.337266    3938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2 +20000M
	I0311 13:51:33.347834    3938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:51:33.347848    3938 main.go:141] libmachine: STDERR: 
	I0311 13:51:33.347858    3938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2
	I0311 13:51:33.347863    3938 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:51:33.347897    3938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ab:7e:78:f2:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/offline-docker-485000/disk.qcow2
	I0311 13:51:33.349453    3938 main.go:141] libmachine: STDOUT: 
	I0311 13:51:33.349470    3938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:51:33.349482    3938 client.go:171] duration metric: took 261.869333ms to LocalClient.Create
	I0311 13:51:35.351594    3938 start.go:128] duration metric: took 2.315423833s to createHost
	I0311 13:51:35.351689    3938 start.go:83] releasing machines lock for "offline-docker-485000", held for 2.316061375s
	W0311 13:51:35.352011    3938 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-485000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-485000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:35.365579    3938 out.go:177] 
	W0311 13:51:35.368672    3938 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:51:35.368738    3938 out.go:239] * 
	* 
	W0311 13:51:35.377301    3938 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:51:35.383447    3938 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-485000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-11 13:51:35.398545 -0700 PDT m=+2518.721128793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-485000 -n offline-docker-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-485000 -n offline-docker-485000: exit status 7 (52.830917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-485000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-485000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-485000
--- FAIL: TestOffline (9.99s)
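
The repeated ERROR above — Failed to connect to "/var/run/socket_vmnet": Connection refused — means socket_vmnet_client could not reach the socket_vmnet daemon when launching qemu-system-aarch64, and the same error recurs across most of the qemu2 start failures in this report. A minimal host-side check (a sketch; assumes a Homebrew-managed daemon and the default socket path seen in the logs):

	# Is the daemon's unix socket present on the agent?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet is managed by Homebrew services, restart it (vmnet requires root).
	sudo brew services restart socket_vmnet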

TestAddons/parallel/Ingress (33.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-212000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-212000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-212000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae131844-52af-4bb0-a405-a85b1839a298] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae131844-52af-4bb0-a405-a85b1839a298] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004160208s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-212000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.033322125s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-arm64 -p addons-212000 addons disable ingress --alsologtostderr -v=1: (7.229369625s)
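
The nslookup above burns through its full retry cycle (the 15s runtime) before failing; queries to the ingress-dns responder at 192.168.105.2 simply time out. When triaging by hand, an explicit short timeout fails fast (a sketch using dig; the address comes from the minikube ip step above):

	# Fail fast instead of waiting through nslookup's default retries.
	dig +time=2 +tries=1 hello-john.test @192.168.105.2
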
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-212000 -n addons-212000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| delete  | -p download-only-707000                                                                     | download-only-707000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| start   | -o=json --download-only                                                                     | download-only-016000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT |                     |
	|         | -p download-only-016000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:11 PDT |
	| delete  | -p download-only-016000                                                                     | download-only-016000 | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:11 PDT |
	| delete  | -p download-only-006000                                                                     | download-only-006000 | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:11 PDT |
	| delete  | -p download-only-707000                                                                     | download-only-707000 | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:11 PDT |
	| delete  | -p download-only-016000                                                                     | download-only-016000 | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:11 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-961000 | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT |                     |
	|         | binary-mirror-961000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49328                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-961000                                                                     | binary-mirror-961000 | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:11 PDT |
	| addons  | disable dashboard -p                                                                        | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT |                     |
	|         | addons-212000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT |                     |
	|         | addons-212000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-212000 --wait=true                                                                | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:11 PDT | 11 Mar 24 13:14 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                                                |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                                           |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-212000 ip                                                                            | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:14 PDT | 11 Mar 24 13:14 PDT |
	| addons  | addons-212000 addons disable                                                                | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:14 PDT | 11 Mar 24 13:14 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-212000 addons                                                                        | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:14 PDT | 11 Mar 24 13:14 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:14 PDT | 11 Mar 24 13:15 PDT |
	|         | addons-212000                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-212000 ssh curl -s                                                                   | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-212000 ip                                                                            | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	| addons  | addons-212000 addons                                                                        | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-212000 addons                                                                        | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-212000 addons disable                                                                | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-212000 addons disable                                                                | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ssh     | addons-212000 ssh cat                                                                       | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT | 11 Mar 24 13:15 PDT |
	|         | /opt/local-path-provisioner/pvc-96ca7c81-3edb-43c3-9d40-7db83042191a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-212000 addons disable                                                                | addons-212000        | jenkins | v1.32.0 | 11 Mar 24 13:15 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:11:05
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:11:05.400344    1833 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:11:05.400467    1833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:11:05.400471    1833 out.go:304] Setting ErrFile to fd 2...
	I0311 13:11:05.400473    1833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:11:05.400595    1833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:11:05.401609    1833 out.go:298] Setting JSON to false
	I0311 13:11:05.417626    1833 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":636,"bootTime":1710187229,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:11:05.417691    1833 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:11:05.422968    1833 out.go:177] * [addons-212000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:11:05.428940    1833 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:11:05.432917    1833 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:11:05.428993    1833 notify.go:220] Checking for updates...
	I0311 13:11:05.438938    1833 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:11:05.441914    1833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:11:05.444949    1833 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:11:05.447871    1833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:11:05.451192    1833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:11:05.455951    1833 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 13:11:05.462894    1833 start.go:297] selected driver: qemu2
	I0311 13:11:05.462899    1833 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:11:05.462905    1833 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:11:05.465220    1833 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:11:05.468965    1833 out.go:177] * Automatically selected the socket_vmnet network
	I0311 13:11:05.470478    1833 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:11:05.470517    1833 cni.go:84] Creating CNI manager for ""
	I0311 13:11:05.470523    1833 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:11:05.470527    1833 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 13:11:05.470571    1833 start.go:340] cluster config:
	{Name:addons-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:11:05.474975    1833 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:11:05.478930    1833 out.go:177] * Starting "addons-212000" primary control-plane node in "addons-212000" cluster
	I0311 13:11:05.486888    1833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:11:05.486900    1833 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:11:05.486908    1833 cache.go:56] Caching tarball of preloaded images
	I0311 13:11:05.486963    1833 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:11:05.486968    1833 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:11:05.487182    1833 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/config.json ...
	I0311 13:11:05.487192    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/config.json: {Name:mkeb1906ca92c398e09ab835a8c98ee553570088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
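The two lines above persist the generated cluster config: a named write lock is taken (lock.go:35), then the profile's config.json is saved. A minimal Go sketch of the portable core of that step, marshal to a temp file and rename so readers never see a half-written config; the struct is a small illustrative subset of the real cluster config, not minikube's actual type:

package main

import (
	"encoding/json"
	"log"
	"os"
)

// ClusterConfig is a tiny illustrative subset of the fields dumped in the
// cluster config line above; the real struct is much larger.
type ClusterConfig struct {
	Name   string `json:"Name"`
	Driver string `json:"Driver"`
	Memory int    `json:"Memory"`
}

// saveConfig marshals the profile and writes it via temp-file-plus-rename.
// minikube additionally guards the write with the named lock shown above.
func saveConfig(path string, c ClusterConfig) error {
	b, err := json.MarshalIndent(c, "", "  ")
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, b, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := saveConfig("config.json", ClusterConfig{Name: "addons-212000", Driver: "qemu2", Memory: 4000}); err != nil {
		log.Fatal(err)
	}
}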
	I0311 13:11:05.487400    1833 start.go:360] acquireMachinesLock for addons-212000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:11:05.487562    1833 start.go:364] duration metric: took 156.167µs to acquireMachinesLock for "addons-212000"
	I0311 13:11:05.487573    1833 start.go:93] Provisioning new machine with config: &{Name:addons-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:11:05.487597    1833 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:11:05.491959    1833 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0311 13:11:05.723562    1833 start.go:159] libmachine.API.Create for "addons-212000" (driver="qemu2")
	I0311 13:11:05.723609    1833 client.go:168] LocalClient.Create starting
	I0311 13:11:05.723771    1833 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:11:05.863726    1833 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:11:05.965440    1833 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:11:06.668446    1833 main.go:141] libmachine: Creating SSH key...
	I0311 13:11:06.767734    1833 main.go:141] libmachine: Creating Disk image...
	I0311 13:11:06.767738    1833 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:11:06.767952    1833 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/disk.qcow2
	I0311 13:11:06.790144    1833 main.go:141] libmachine: STDOUT: 
	I0311 13:11:06.790172    1833 main.go:141] libmachine: STDERR: 
	I0311 13:11:06.790228    1833 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/disk.qcow2 +20000M
	I0311 13:11:06.800881    1833 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:11:06.800907    1833 main.go:141] libmachine: STDERR: 
	I0311 13:11:06.800919    1833 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/disk.qcow2
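The two qemu-img invocations above, a raw-to-qcow2 convert followed by a +20000M resize, make up the whole disk-provisioning step. A minimal Go sketch of the same pair of calls, assuming qemu-img is on PATH; the file names here are illustrative, not the test's real paths:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// createDisk converts a raw seed image to qcow2 and grows it, mirroring the
// two qemu-img invocations in the log above. Paths are illustrative.
func createDisk(raw, qcow2 string, extraMB int) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		log.Fatal(err)
	}
}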
	I0311 13:11:06.800924    1833 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:11:06.800961    1833 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:fb:f8:92:c9:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/disk.qcow2
	I0311 13:11:06.857162    1833 main.go:141] libmachine: STDOUT: 
	I0311 13:11:06.857203    1833 main.go:141] libmachine: STDERR: 
	I0311 13:11:06.857207    1833 main.go:141] libmachine: Attempt 0
	I0311 13:11:06.857229    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:06.857275    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:06.857293    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:08.859397    1833 main.go:141] libmachine: Attempt 1
	I0311 13:11:08.859470    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:08.859779    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:08.859827    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:10.861953    1833 main.go:141] libmachine: Attempt 2
	I0311 13:11:10.862010    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:10.862121    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:10.862153    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:12.864247    1833 main.go:141] libmachine: Attempt 3
	I0311 13:11:12.864316    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:12.864392    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:12.864427    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:14.866417    1833 main.go:141] libmachine: Attempt 4
	I0311 13:11:14.866424    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:14.866456    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:14.866461    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:16.868439    1833 main.go:141] libmachine: Attempt 5
	I0311 13:11:16.868447    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:16.868474    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:16.868480    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:18.870479    1833 main.go:141] libmachine: Attempt 6
	I0311 13:11:18.870499    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:18.870571    1833 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0311 13:11:18.870581    1833 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x65f0b663}
	I0311 13:11:20.872717    1833 main.go:141] libmachine: Attempt 7
	I0311 13:11:20.872837    1833 main.go:141] libmachine: Searching for 52:fb:f8:92:c9:d8 in /var/db/dhcpd_leases ...
	I0311 13:11:20.873163    1833 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0311 13:11:20.873213    1833 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:52:fb:f8:92:c9:d8 ID:1,52:fb:f8:92:c9:d8 Lease:0x65f0b6e7}
	I0311 13:11:20.873230    1833 main.go:141] libmachine: Found match: 52:fb:f8:92:c9:d8
	I0311 13:11:20.873262    1833 main.go:141] libmachine: IP: 192.168.105.2
	I0311 13:11:20.873286    1833 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
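The retry loop above polls /var/db/dhcpd_leases every two seconds until the VM's MAC address shows up with a lease. A rough Go sketch of that lookup, assuming the stock macOS vmnet lease layout of key=value lines (name=, ip_address=, hw_address=1,<mac>); these field names and the ordering assumption are recalled from that format, not taken from this log, so verify against a real lease file:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a macOS vmnet dhcpd_leases file for a MAC address and
// returns the ip_address recorded in the same entry. Assumes ip_address=
// precedes hw_address= within an entry; check a real /var/db/dhcpd_leases
// before relying on this.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			// hw_address carries a type prefix, e.g. "1,52:fb:f8:92:c9:d8".
			return ip, nil
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease for %s in %s", mac, path)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "52:fb:f8:92:c9:d8")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}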
	I0311 13:11:23.896353    1833 machine.go:94] provisionDockerMachine start ...
	I0311 13:11:23.898213    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:23.898691    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:23.898705    1833 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:11:23.965688    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 13:11:23.965720    1833 buildroot.go:166] provisioning hostname "addons-212000"
	I0311 13:11:23.965833    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:23.966059    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:23.966069    1833 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-212000 && echo "addons-212000" | sudo tee /etc/hostname
	I0311 13:11:24.028245    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-212000
	
	I0311 13:11:24.028307    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:24.028453    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:24.028463    1833 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-212000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-212000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-212000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 13:11:24.080399    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:11:24.080415    1833 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18358-1220/.minikube CaCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18358-1220/.minikube}
	I0311 13:11:24.080432    1833 buildroot.go:174] setting up certificates
	I0311 13:11:24.080437    1833 provision.go:84] configureAuth start
	I0311 13:11:24.080442    1833 provision.go:143] copyHostCerts
	I0311 13:11:24.080543    1833 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem (1082 bytes)
	I0311 13:11:24.080764    1833 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem (1123 bytes)
	I0311 13:11:24.080882    1833 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem (1675 bytes)
	I0311 13:11:24.080970    1833 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem org=jenkins.addons-212000 san=[127.0.0.1 192.168.105.2 addons-212000 localhost minikube]
	I0311 13:11:24.150644    1833 provision.go:177] copyRemoteCerts
	I0311 13:11:24.150692    1833 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:11:24.150709    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:24.176024    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 13:11:24.184085    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 13:11:24.192452    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 13:11:24.201382    1833 provision.go:87] duration metric: took 120.936459ms to configureAuth
	I0311 13:11:24.201393    1833 buildroot.go:189] setting minikube options for container-runtime
	I0311 13:11:24.201524    1833 config.go:182] Loaded profile config "addons-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:11:24.201561    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:24.201655    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:24.201659    1833 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 13:11:24.248116    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 13:11:24.248128    1833 buildroot.go:70] root file system type: tmpfs
	I0311 13:11:24.248188    1833 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 13:11:24.248254    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:24.248372    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:24.248407    1833 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 13:11:24.299414    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 13:11:24.299468    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:24.299577    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:24.299586    1833 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 13:11:24.649149    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0311 13:11:24.649162    1833 machine.go:97] duration metric: took 752.800125ms to provisionDockerMachine
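The `sudo diff -u ... || { sudo mv ...; systemctl ...; }` one-liner a few lines up is an install-only-if-changed idiom: the new unit replaces the old one, and the daemon is reloaded, enabled, and restarted, only when the content actually differs. A small Go sketch of the same idea; the path and service name are illustrative:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// syncUnit mirrors the shell one-liner above: overwrite the unit file and
// bounce the service only when the desired content differs from what is
// already installed.
func syncUnit(path string, want []byte, service string) error {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return nil // unchanged: skip the daemon-reload/restart entirely
	}
	if err := os.WriteFile(path, want, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	if err := syncUnit("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
		log.Fatal(err)
	}
}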
	I0311 13:11:24.649168    1833 client.go:171] duration metric: took 18.9260835s to LocalClient.Create
	I0311 13:11:24.649183    1833 start.go:167] duration metric: took 18.926156708s to libmachine.API.Create "addons-212000"
	I0311 13:11:24.649188    1833 start.go:293] postStartSetup for "addons-212000" (driver="qemu2")
	I0311 13:11:24.649194    1833 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:11:24.649260    1833 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:11:24.649269    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:24.675298    1833 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:11:24.676859    1833 info.go:137] Remote host: Buildroot 2023.02.9
	I0311 13:11:24.676873    1833 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18358-1220/.minikube/addons for local assets ...
	I0311 13:11:24.676955    1833 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18358-1220/.minikube/files for local assets ...
	I0311 13:11:24.676985    1833 start.go:296] duration metric: took 27.794833ms for postStartSetup
	I0311 13:11:24.677373    1833 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/config.json ...
	I0311 13:11:24.677555    1833 start.go:128] duration metric: took 19.190491625s to createHost
	I0311 13:11:24.677585    1833 main.go:141] libmachine: Using SSH client type: native
	I0311 13:11:24.677676    1833 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c31a90] 0x100c342f0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0311 13:11:24.677683    1833 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0311 13:11:24.719358    1833 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710187884.750385169
	
	I0311 13:11:24.719367    1833 fix.go:216] guest clock: 1710187884.750385169
	I0311 13:11:24.719372    1833 fix.go:229] Guest: 2024-03-11 13:11:24.750385169 -0700 PDT Remote: 2024-03-11 13:11:24.677558 -0700 PDT m=+19.299961334 (delta=72.827169ms)
	I0311 13:11:24.719383    1833 fix.go:200] guest clock delta is within tolerance: 72.827169ms
	I0311 13:11:24.719387    1833 start.go:83] releasing machines lock for "addons-212000", held for 19.232358375s
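The fix.go lines above read the guest clock over SSH with `date +%s.%N`, diff it against the host clock, and accept the machine because the 72.8ms delta is inside tolerance. A sketch of that comparison; the 2-second tolerance below is an assumption for illustration, not minikube's configured value:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns its offset
// from the given host timestamp, as in the fix.go lines above.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// right-pad the fraction to 9 digits so "75" means 750ms, not 75ns
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log lines above (guest vs. remote timestamp).
	d, err := clockDelta("1710187884.750385169", time.Unix(1710187884, 677558000))
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within 2s tolerance: %v\n", d, d.Abs() < 2*time.Second)
}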
	I0311 13:11:24.719732    1833 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:11:24.719732    1833 ssh_runner.go:195] Run: cat /version.json
	I0311 13:11:24.719762    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:24.719760    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:24.836867    1833 ssh_runner.go:195] Run: systemctl --version
	I0311 13:11:24.839422    1833 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 13:11:24.841678    1833 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 13:11:24.841708    1833 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 13:11:24.848097    1833 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 13:11:24.848104    1833 start.go:494] detecting cgroup driver to use...
	I0311 13:11:24.848240    1833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:11:24.855654    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0311 13:11:24.859803    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 13:11:24.863702    1833 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 13:11:24.863732    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 13:11:24.867646    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:11:24.871568    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 13:11:24.875549    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:11:24.879571    1833 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:11:24.883545    1833 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 13:11:24.887571    1833 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:11:24.891368    1833 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:11:24.895515    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:24.976033    1833 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 13:11:24.986773    1833 start.go:494] detecting cgroup driver to use...
	I0311 13:11:24.986834    1833 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 13:11:24.992654    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:11:24.998140    1833 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 13:11:25.004584    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:11:25.009955    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 13:11:25.015198    1833 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 13:11:25.067289    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 13:11:25.073953    1833 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:11:25.080765    1833 ssh_runner.go:195] Run: which cri-dockerd
	I0311 13:11:25.082113    1833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 13:11:25.085255    1833 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 13:11:25.091207    1833 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 13:11:25.187950    1833 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 13:11:25.269341    1833 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 13:11:25.269416    1833 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0311 13:11:25.277509    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:25.358717    1833 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 13:11:26.513691    1833 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154988541s)
	I0311 13:11:26.513763    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 13:11:26.519208    1833 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 13:11:26.526474    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 13:11:26.531878    1833 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 13:11:26.612886    1833 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 13:11:26.696888    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:26.785561    1833 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 13:11:26.792147    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 13:11:26.797417    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:26.879442    1833 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 13:11:26.903650    1833 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 13:11:26.903734    1833 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 13:11:26.906084    1833 start.go:562] Will wait 60s for crictl version
	I0311 13:11:26.906137    1833 ssh_runner.go:195] Run: which crictl
	I0311 13:11:26.907622    1833 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:11:26.934030    1833 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0311 13:11:26.934099    1833 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 13:11:26.945441    1833 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 13:11:26.956052    1833 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0311 13:11:26.956206    1833 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0311 13:11:26.957698    1833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:11:26.961899    1833 kubeadm.go:877] updating cluster {Name:addons-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 13:11:26.961948    1833 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:11:26.961991    1833 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 13:11:26.967415    1833 docker.go:685] Got preloaded images: 
	I0311 13:11:26.967423    1833 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0311 13:11:26.967454    1833 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 13:11:26.971389    1833 ssh_runner.go:195] Run: which lz4
	I0311 13:11:26.972839    1833 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0311 13:11:26.974237    1833 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 13:11:26.974248    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I0311 13:11:28.243509    1833 docker.go:649] duration metric: took 1.270729458s to copy over tarball
	I0311 13:11:28.243578    1833 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 13:11:29.309623    1833 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.066054166s)
	I0311 13:11:29.309640    1833 ssh_runner.go:146] rm: /preloaded.tar.lz4
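The preload transfer above stats /preloaded.tar.lz4 on the guest first (the stat fails with status 1 since the file is absent), only then copies the ~358MB tarball over, unpacks it with lz4-compressed tar into /var, and removes it. A hedged sketch of the stat-then-copy half using the system ssh/scp binaries; the host string and both paths are illustrative, and real code would reuse the already-established SSH session:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload mimics the existence-check-then-copy logic above: skip the
// expensive transfer whenever the guest already has the tarball.
func ensurePreload(host, remote, local string) error {
	// `stat` exits non-zero when the remote file is missing, as in the log.
	if err := exec.Command("ssh", host, "stat", remote).Run(); err == nil {
		return nil // already on the guest
	}
	if _, err := os.Stat(local); err != nil {
		return fmt.Errorf("local preload missing: %w", err)
	}
	return exec.Command("scp", local, host+":"+remote).Run()
}

func main() {
	err := ensurePreload("docker@192.168.105.2", "/preloaded.tar.lz4",
		"preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4")
	if err != nil {
		fmt.Println(err)
	}
}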
	I0311 13:11:29.325452    1833 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 13:11:29.328866    1833 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0311 13:11:29.334601    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:29.417554    1833 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 13:11:32.086570    1833 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.669063041s)
	I0311 13:11:32.086650    1833 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 13:11:32.092703    1833 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 13:11:32.092713    1833 cache_images.go:84] Images are preloaded, skipping loading
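The preloaded-images check seen twice in this log (first "wasn't preloaded" at docker.go:691, now "Images are preloaded") boils down to comparing `docker images --format {{.Repository}}:{{.Tag}}` output against a required list. A minimal sketch of that comparison; the one-image required list below is a stand-in, not minikube's full list:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloaded reports whether every required image appears in the local
// docker image list, mirroring the check in the log above.
func preloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, l := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[l] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{"registry.k8s.io/kube-apiserver:v1.28.4"})
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}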
	I0311 13:11:32.092719    1833 kubeadm.go:928] updating node { 192.168.105.2 8443 v1.28.4 docker true true} ...
	I0311 13:11:32.092788    1833 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-212000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0311 13:11:32.092845    1833 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 13:11:32.100855    1833 cni.go:84] Creating CNI manager for ""
	I0311 13:11:32.100870    1833 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:11:32.100886    1833 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 13:11:32.100896    1833 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-212000 NodeName:addons-212000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 13:11:32.100961    1833 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-212000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
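
minikube renders the multi-document config above from Go data (kubeadm.go:187 in the log). A cut-down text/template sketch that produces just the InitConfiguration document; the template and struct fields here are illustrative, not minikube's real types:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template for the first document of the kubeadm config
// shown above. Field names are invented for illustration.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.105.2", 8443, "addons-212000"})
}
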
	
	I0311 13:11:32.101030    1833 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 13:11:32.104467    1833 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:11:32.104498    1833 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 13:11:32.107832    1833 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0311 13:11:32.113633    1833 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:11:32.119451    1833 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0311 13:11:32.125310    1833 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0311 13:11:32.126649    1833 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
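
The bash one-liner above strips any stale control-plane.minikube.internal entry before appending the fresh one, keeping /etc/hosts idempotent across restarts. A pure-Go equivalent, as a sketch (no file locking, and writing /etc/hosts needs root):

package main

import (
	"os"
	"strings"
)

// setHostsEntry rewrites path so it contains exactly one line mapping
// "control-plane.minikube.internal" to ip, like the bash one-liner above.
func setHostsEntry(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane name.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.105.2"); err != nil {
		panic(err)
	}
}
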
	I0311 13:11:32.130876    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:32.196211    1833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:11:32.206053    1833 certs.go:68] Setting up /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000 for IP: 192.168.105.2
	I0311 13:11:32.206065    1833 certs.go:194] generating shared ca certs ...
	I0311 13:11:32.206073    1833 certs.go:226] acquiring lock for ca certs: {Name:mkd7f96dc3b50acb1e4b9ffed31996dfe6eec0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.206246    1833 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key
	I0311 13:11:32.416669    1833 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt ...
	I0311 13:11:32.416686    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt: {Name:mk34f8573df5d3777fadf6ceb41b73866313923e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.417111    1833 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key ...
	I0311 13:11:32.417117    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key: {Name:mk7accb838db84aca20cb936bd1f7c8fc9a90953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.417414    1833 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key
	I0311 13:11:32.459905    1833 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.crt ...
	I0311 13:11:32.459916    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.crt: {Name:mk5b3662155ed57cc683ace5017bc6659ab748f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.460163    1833 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key ...
	I0311 13:11:32.460168    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key: {Name:mk683b18de708a3d845f901893d9776f32132c9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.460284    1833 certs.go:256] generating profile certs ...
	I0311 13:11:32.460320    1833 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.key
	I0311 13:11:32.460327    1833 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt with IP's: []
	I0311 13:11:32.524398    1833 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt ...
	I0311 13:11:32.524402    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: {Name:mkecab0eddb1f1e79376e30ab29077ad3fc5599d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.524546    1833 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.key ...
	I0311 13:11:32.524549    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.key: {Name:mk3969cf931f63e8132d21d5dee9d3300a2f2c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.524649    1833 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.key.1a15ceab
	I0311 13:11:32.524658    1833 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.crt.1a15ceab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0311 13:11:32.715306    1833 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.crt.1a15ceab ...
	I0311 13:11:32.715311    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.crt.1a15ceab: {Name:mk0526195bab061b60c1d7ccb0d605b3d595e2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.715475    1833 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.key.1a15ceab ...
	I0311 13:11:32.715479    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.key.1a15ceab: {Name:mk90f295e620047fbcbf16ca4a22d6764c63d1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.715583    1833 certs.go:381] copying /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.crt.1a15ceab -> /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.crt
	I0311 13:11:32.715869    1833 certs.go:385] copying /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.key.1a15ceab -> /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.key
	I0311 13:11:32.715989    1833 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.key
	I0311 13:11:32.716002    1833 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.crt with IP's: []
	I0311 13:11:32.956391    1833 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.crt ...
	I0311 13:11:32.956403    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.crt: {Name:mk198a360e95308d1eb3e17a7c8aace950e97875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.956647    1833 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.key ...
	I0311 13:11:32.956654    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.key: {Name:mk0b569fcd0ef0caf14bc3ca87517631223b4b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:32.956898    1833 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:11:32.956928    1833 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem (1082 bytes)
	I0311 13:11:32.956952    1833 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:11:32.956977    1833 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem (1675 bytes)
	I0311 13:11:32.957297    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:11:32.966088    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 13:11:32.974093    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:11:32.982175    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0311 13:11:32.990282    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 13:11:32.998517    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 13:11:33.006774    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 13:11:33.015006    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 13:11:33.023240    1833 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:11:33.031501    1833 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
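
The certs.go steps above mint a CA ("minikubeCA") and then sign per-profile certificates with SANs covering the service VIP, localhost, and the node IP, before copying everything into /var/lib/minikube/certs. A self-contained sketch of the CA half using crypto/x509; illustrative, not minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate a key and a self-signed CA certificate, roughly what the
	// `generating "minikubeCA" ca cert` step produces.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
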
	I0311 13:11:33.038531    1833 ssh_runner.go:195] Run: openssl version
	I0311 13:11:33.040603    1833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:11:33.044202    1833 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:11:33.045752    1833 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:11:33.045781    1833 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:11:33.047611    1833 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
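
`openssl x509 -hash` prints the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs, so the b5213941.0 symlink is what makes minikubeCA trusted system-wide inside the guest. The same two commands, sketched in Go (assumes openssl on PATH and root privileges):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink, as the shell commands above do.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
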
	I0311 13:11:33.051552    1833 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:11:33.052957    1833 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 13:11:33.052984    1833 kubeadm.go:391] StartCluster: {Name:addons-212000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:11:33.053050    1833 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 13:11:33.065464    1833 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 13:11:33.068824    1833 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 13:11:33.072057    1833 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 13:11:33.075503    1833 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 13:11:33.075509    1833 kubeadm.go:156] found existing configuration files:
	
	I0311 13:11:33.075529    1833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 13:11:33.078943    1833 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 13:11:33.078964    1833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 13:11:33.082510    1833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 13:11:33.085944    1833 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 13:11:33.085970    1833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 13:11:33.089217    1833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 13:11:33.092130    1833 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 13:11:33.092150    1833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 13:11:33.095455    1833 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 13:11:33.098977    1833 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 13:11:33.098996    1833 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 13:11:33.102422    1833 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 13:11:33.127098    1833 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 13:11:33.127128    1833 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 13:11:33.176215    1833 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 13:11:33.176279    1833 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 13:11:33.176333    1833 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 13:11:33.280952    1833 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 13:11:33.294114    1833 out.go:204]   - Generating certificates and keys ...
	I0311 13:11:33.294149    1833 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 13:11:33.294199    1833 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 13:11:33.332245    1833 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 13:11:33.501267    1833 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 13:11:33.542429    1833 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 13:11:33.670049    1833 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 13:11:33.857342    1833 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 13:11:33.857412    1833 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-212000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0311 13:11:33.916545    1833 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 13:11:33.916666    1833 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-212000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0311 13:11:34.134968    1833 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 13:11:34.251413    1833 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 13:11:34.314371    1833 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 13:11:34.314400    1833 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 13:11:34.384005    1833 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 13:11:34.429574    1833 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 13:11:34.543528    1833 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 13:11:34.716335    1833 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 13:11:34.716518    1833 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 13:11:34.718420    1833 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 13:11:34.726555    1833 out.go:204]   - Booting up control plane ...
	I0311 13:11:34.726599    1833 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 13:11:34.726638    1833 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 13:11:34.726678    1833 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 13:11:34.726744    1833 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 13:11:34.726825    1833 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 13:11:34.726855    1833 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 13:11:34.820217    1833 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 13:11:38.821200    1833 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001156 seconds
	I0311 13:11:38.821271    1833 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 13:11:38.826548    1833 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 13:11:39.345259    1833 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 13:11:39.345361    1833 kubeadm.go:309] [mark-control-plane] Marking the node addons-212000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 13:11:39.851059    1833 kubeadm.go:309] [bootstrap-token] Using token: 0csywe.nkf383en2mqj97yc
	I0311 13:11:39.863362    1833 out.go:204]   - Configuring RBAC rules ...
	I0311 13:11:39.863416    1833 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 13:11:39.863462    1833 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 13:11:39.865060    1833 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 13:11:39.866214    1833 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 13:11:39.867498    1833 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 13:11:39.868705    1833 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 13:11:39.872888    1833 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 13:11:40.056024    1833 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 13:11:40.260774    1833 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 13:11:40.261226    1833 kubeadm.go:309] 
	I0311 13:11:40.261257    1833 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 13:11:40.261261    1833 kubeadm.go:309] 
	I0311 13:11:40.261304    1833 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 13:11:40.261312    1833 kubeadm.go:309] 
	I0311 13:11:40.261325    1833 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 13:11:40.261365    1833 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 13:11:40.261413    1833 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 13:11:40.261420    1833 kubeadm.go:309] 
	I0311 13:11:40.261456    1833 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 13:11:40.261458    1833 kubeadm.go:309] 
	I0311 13:11:40.261483    1833 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 13:11:40.261486    1833 kubeadm.go:309] 
	I0311 13:11:40.261515    1833 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 13:11:40.261570    1833 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 13:11:40.261612    1833 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 13:11:40.261616    1833 kubeadm.go:309] 
	I0311 13:11:40.261676    1833 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 13:11:40.261726    1833 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 13:11:40.261732    1833 kubeadm.go:309] 
	I0311 13:11:40.261774    1833 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0csywe.nkf383en2mqj97yc \
	I0311 13:11:40.261831    1833 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b0f804fee3777fe090204338c70c85586d2b95499c0fea24e08ef3935500f54b \
	I0311 13:11:40.261853    1833 kubeadm.go:309] 	--control-plane 
	I0311 13:11:40.261855    1833 kubeadm.go:309] 
	I0311 13:11:40.261897    1833 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 13:11:40.261900    1833 kubeadm.go:309] 
	I0311 13:11:40.261953    1833 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0csywe.nkf383en2mqj97yc \
	I0311 13:11:40.262009    1833 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b0f804fee3777fe090204338c70c85586d2b95499c0fea24e08ef3935500f54b 
	I0311 13:11:40.262061    1833 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 13:11:40.262070    1833 cni.go:84] Creating CNI manager for ""
	I0311 13:11:40.262079    1833 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:11:40.266673    1833 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 13:11:40.272648    1833 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 13:11:40.276236    1833 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
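
The 457-byte conflist written above enables the standard bridge CNI plugin for pod networking. Its exact contents are not reproduced in the log, so the JSON below is only a representative bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR, not the verbatim file:

package main

import "os"

// A representative CNI conflist for the bridge plugin; minikube's actual
// /etc/cni/net.d/1-k8s.conflist may differ in detail.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
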
	I0311 13:11:40.281635    1833 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 13:11:40.281681    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:40.281684    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-212000 minikube.k8s.io/updated_at=2024_03_11T13_11_40_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=addons-212000 minikube.k8s.io/primary=true
	I0311 13:11:40.331005    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:40.353627    1833 ops.go:34] apiserver oom_adj: -16
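
An oom_adj of -16 makes the kernel's OOM killer much less likely to pick the apiserver. A sketch performing the same check as the `cat /proc/$(pgrep kube-apiserver)/oom_adj` line above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// pgrep kube-apiserver, then read its oom_adj, as the bash line does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0]
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	adj, _ := strconv.Atoi(strings.TrimSpace(string(data)))
	fmt.Println("apiserver oom_adj:", adj)
}
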
	I0311 13:11:40.833064    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:41.333036    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:41.833089    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:42.333029    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:42.832999    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:43.333078    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:43.831636    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:44.332981    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:44.832988    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:45.333003    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:45.832936    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:46.332921    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:46.832939    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:47.332915    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:47.832883    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:48.332920    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:48.832888    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:49.332870    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:49.832859    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:50.332804    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:50.832817    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:51.332512    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:51.832836    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:52.332735    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:52.832735    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:53.332699    1833 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:11:53.365926    1833 kubeadm.go:1106] duration metric: took 13.084643875s to wait for elevateKubeSystemPrivileges
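
The burst of identical `kubectl get sa default` runs above is a poll: kubeadm returns before the controller-manager has created the "default" ServiceAccount, so minikube retries (about 13s here) before granting it cluster-admin. A generic sketch of such a wait loop (minikube actually drives this over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// deadline passes. Illustrative helper, not minikube's kubeadm.go.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		panic(err)
	}
}
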
	W0311 13:11:53.365962    1833 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 13:11:53.365967    1833 kubeadm.go:393] duration metric: took 20.31355225s to StartCluster
	I0311 13:11:53.365977    1833 settings.go:142] acquiring lock: {Name:mkde8963c2fec7d8df74a4e81a4ba3233d320136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:53.366138    1833 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:11:53.366358    1833 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/kubeconfig: {Name:mkd61d3fa94ba0392c00bb2cce43bcec89e45a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:11:53.366584    1833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 13:11:53.366610    1833 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:11:53.370316    1833 out.go:177] * Verifying Kubernetes components...
	I0311 13:11:53.366640    1833 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0311 13:11:53.366832    1833 config.go:182] Loaded profile config "addons-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:11:53.378347    1833 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:11:53.378408    1833 addons.go:69] Setting cloud-spanner=true in profile "addons-212000"
	I0311 13:11:53.378409    1833 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-212000"
	I0311 13:11:53.378446    1833 addons.go:69] Setting default-storageclass=true in profile "addons-212000"
	I0311 13:11:53.378450    1833 addons.go:69] Setting yakd=true in profile "addons-212000"
	I0311 13:11:53.378457    1833 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-212000"
	I0311 13:11:53.378458    1833 addons.go:234] Setting addon yakd=true in "addons-212000"
	I0311 13:11:53.378485    1833 addons.go:69] Setting gcp-auth=true in profile "addons-212000"
	I0311 13:11:53.378439    1833 addons.go:234] Setting addon cloud-spanner=true in "addons-212000"
	I0311 13:11:53.378513    1833 addons.go:69] Setting volumesnapshots=true in profile "addons-212000"
	I0311 13:11:53.378502    1833 addons.go:69] Setting inspektor-gadget=true in profile "addons-212000"
	I0311 13:11:53.378511    1833 addons.go:69] Setting ingress-dns=true in profile "addons-212000"
	I0311 13:11:53.378527    1833 addons.go:234] Setting addon volumesnapshots=true in "addons-212000"
	I0311 13:11:53.378530    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378535    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378542    1833 addons.go:234] Setting addon ingress-dns=true in "addons-212000"
	I0311 13:11:53.378545    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378443    1833 addons.go:69] Setting ingress=true in profile "addons-212000"
	I0311 13:11:53.378576    1833 mustload.go:65] Loading cluster: addons-212000
	I0311 13:11:53.378480    1833 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-212000"
	I0311 13:11:53.378496    1833 addons.go:69] Setting storage-provisioner=true in profile "addons-212000"
	I0311 13:11:53.378608    1833 addons.go:234] Setting addon storage-provisioner=true in "addons-212000"
	I0311 13:11:53.378620    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378623    1833 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-212000"
	I0311 13:11:53.378637    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378574    1833 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-212000"
	I0311 13:11:53.378746    1833 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-212000"
	I0311 13:11:53.378492    1833 addons.go:69] Setting metrics-server=true in profile "addons-212000"
	I0311 13:11:53.378789    1833 addons.go:234] Setting addon metrics-server=true in "addons-212000"
	I0311 13:11:53.378797    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378889    1833 config.go:182] Loaded profile config "addons-212000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:11:53.378531    1833 addons.go:234] Setting addon inspektor-gadget=true in "addons-212000"
	I0311 13:11:53.378982    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.378982    1833 retry.go:31] will retry after 1.107590663s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.378596    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.379002    1833 retry.go:31] will retry after 607.489602ms: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379005    1833 retry.go:31] will retry after 1.300546424s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.378498    1833 addons.go:69] Setting registry=true in profile "addons-212000"
	I0311 13:11:53.379135    1833 retry.go:31] will retry after 877.966463ms: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379148    1833 retry.go:31] will retry after 1.050355481s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379156    1833 addons.go:234] Setting addon registry=true in "addons-212000"
	I0311 13:11:53.378581    1833 addons.go:234] Setting addon ingress=true in "addons-212000"
	I0311 13:11:53.379164    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.379173    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.379184    1833 retry.go:31] will retry after 777.540796ms: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379186    1833 retry.go:31] will retry after 1.077725645s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379217    1833 retry.go:31] will retry after 1.224853652s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.378510    1833 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-212000"
	I0311 13:11:53.379260    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.379355    1833 retry.go:31] will retry after 1.128587502s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379361    1833 retry.go:31] will retry after 1.107037107s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379472    1833 retry.go:31] will retry after 1.286498133s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
	I0311 13:11:53.379648    1833 retry.go:31] will retry after 1.091273699s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/monitor: connect: connection refused
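
The retry.go lines above show jittered backoff while the QEMU monitor socket comes up; each caller sleeps a slightly different interval so reconnect attempts don't stampede. A generic sketch in that spirit; the helper name and intervals are invented for illustration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryJitter retries fn with a randomized delay around base, similar in
// spirit to the jittered "will retry after ..." intervals in the log.
func retryJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	_ = retryJitter(5, time.Second, func() error {
		i++
		if i < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
}
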
	I0311 13:11:53.384271    1833 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 13:11:53.388312    1833 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 13:11:53.388349    1833 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 13:11:53.389503    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 13:11:53.389511    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:53.389524    1833 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 13:11:53.389528    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 13:11:53.389533    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:53.424663    1833 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
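
The sed pipeline above splices a hosts block ahead of the `forward . /etc/resolv.conf` stanza so host.minikube.internal resolves to the host gateway (192.168.105.1) from inside the cluster. The resulting Corefile fragment looks roughly like the string below; the real ConfigMap also keeps the stock kubernetes/prometheus/cache stanzas:

package main

import "fmt"

func main() {
	// Approximate Corefile fragment after the injection above.
	fmt.Print(`.:53 {
    log
    errors
    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}
`)
}
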
	I0311 13:11:53.476241    1833 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:11:53.540414    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 13:11:53.573164    1833 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 13:11:53.573179    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 13:11:53.605713    1833 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 13:11:53.605726    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 13:11:53.611704    1833 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 13:11:53.611715    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 13:11:53.618238    1833 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 13:11:53.618248    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 13:11:53.627925    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 13:11:53.990646    1833 addons.go:234] Setting addon default-storageclass=true in "addons-212000"
	I0311 13:11:53.990672    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:53.991417    1833 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 13:11:53.991424    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 13:11:53.991429    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.144039    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 13:11:54.164386    1833 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 13:11:54.168327    1833 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 13:11:54.168338    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 13:11:54.168348    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.227809    1833 start.go:948] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0311 13:11:54.228246    1833 node_ready.go:35] waiting up to 6m0s for node "addons-212000" to be "Ready" ...
	I0311 13:11:54.238907    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 13:11:54.244387    1833 node_ready.go:49] node "addons-212000" has status "Ready":"True"
	I0311 13:11:54.244396    1833 node_ready.go:38] duration metric: took 16.140125ms for node "addons-212000" to be "Ready" ...
	I0311 13:11:54.244400    1833 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 13:11:54.258191    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:54.266156    1833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace to be "Ready" ...
	I0311 13:11:54.472421    1833 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 13:11:54.475393    1833 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 13:11:54.481290    1833 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0311 13:11:54.485273    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 13:11:54.488296    1833 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 13:11:54.492301    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 13:11:54.492314    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.497221    1833 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 13:11:54.485281    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 13:11:54.507377    1833 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:11:54.507426    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.513302    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 13:11:54.518324    1833 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 13:11:54.522263    1833 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 13:11:54.525338    1833 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:11:54.532283    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 13:11:54.539297    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 13:11:54.542275    1833 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-212000 service yakd-dashboard -n yakd-dashboard
	
	I0311 13:11:54.548618    1833 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 13:11:54.548633    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.560314    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 13:11:54.557358    1833 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 13:11:54.571153    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 13:11:54.563432    1833 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 13:11:54.567350    1833 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 13:11:54.569326    1833 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 13:11:54.575229    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 13:11:54.578196    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 13:11:54.575283    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 13:11:54.575296    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 13:11:54.581266    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.581292    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.584248    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 13:11:54.593235    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 13:11:54.598286    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 13:11:54.598297    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 13:11:54.598308    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.601681    1833 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 13:11:54.601692    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 13:11:54.605169    1833 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-212000"
	I0311 13:11:54.605193    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:11:54.609094    1833 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 13:11:54.616219    1833 out.go:177]   - Using image docker.io/busybox:stable
	I0311 13:11:54.620348    1833 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 13:11:54.620360    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 13:11:54.620371    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.635897    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:11:54.641728    1833 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 13:11:54.641738    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 13:11:54.644585    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 13:11:54.669232    1833 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 13:11:54.673306    1833 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 13:11:54.673315    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 13:11:54.673326    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.684232    1833 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 13:11:54.688257    1833 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 13:11:54.688269    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 13:11:54.688281    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:11:54.688545    1833 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 13:11:54.688553    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 13:11:54.689625    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 13:11:54.699452    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 13:11:54.699462    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 13:11:54.700740    1833 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 13:11:54.700746    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 13:11:54.723640    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 13:11:54.731216    1833 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-212000" context rescaled to 1 replicas
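The kapi.go:248 line above records minikube scaling the coredns deployment down to a single replica; on a one-node cluster the default second replica only costs memory. A sketch of the same rescale through client-go's scale subresource (the default kubeconfig path is an assumption here):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        // Read the current scale of coredns, then pin it to one replica.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Println("coredns rescaled to 1 replica")
    }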
	I0311 13:11:54.743004    1833 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 13:11:54.743015    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 13:11:54.760032    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 13:11:54.760044    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 13:11:54.764113    1833 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 13:11:54.764122    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 13:11:54.781909    1833 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 13:11:54.781918    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 13:11:54.815421    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 13:11:54.815432    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 13:11:54.816587    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 13:11:54.820926    1833 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 13:11:54.820939    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 13:11:54.831751    1833 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 13:11:54.831760    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 13:11:54.839994    1833 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 13:11:54.840004    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 13:11:54.844733    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 13:11:54.844740    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 13:11:54.847855    1833 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 13:11:54.847863    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 13:11:54.902304    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 13:11:54.925877    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 13:11:54.925891    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 13:11:54.929725    1833 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 13:11:54.929734    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 13:11:54.940873    1833 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 13:11:54.940886    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 13:11:54.997040    1833 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 13:11:54.997052    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 13:11:55.024041    1833 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 13:11:55.024055    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 13:11:55.083389    1833 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:11:55.083405    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 13:11:55.094984    1833 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 13:11:55.094995    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 13:11:55.148184    1833 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 13:11:55.148196    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 13:11:55.176798    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 13:11:55.177895    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 13:11:55.186126    1833 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 13:11:55.186136    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 13:11:55.273525    1833 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 13:11:55.273535    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 13:11:55.378668    1833 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 13:11:55.378680    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 13:11:55.499441    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
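Each Run line above batches many manifests into a single kubectl apply with repeated -f flags, so one remote invocation covers a whole addon. A hedged sketch of building such an invocation (file list abbreviated; the KUBECONFIG value is taken from the log):

    package main

    import (
        "context"
        "log"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        // Abbreviated manifest list; the real invocation above names eleven files.
        files := []string{
            "/etc/kubernetes/addons/rbac-external-attacher.yaml",
            "/etc/kubernetes/addons/csi-hostpath-plugin.yaml",
        }
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.CommandContext(ctx, "kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("apply failed: %v", err)
        }
    }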
	I0311 13:11:55.638057    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.002171209s)
	I0311 13:11:56.270068    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:11:57.294437    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.570851583s)
	I0311 13:11:57.294459    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.604897167s)
	I0311 13:11:57.294483    1833 addons.go:470] Verifying addon ingress=true in "addons-212000"
	I0311 13:11:57.299252    1833 out.go:177] * Verifying ingress addon...
	I0311 13:11:57.294554    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.478025709s)
	I0311 13:11:57.294578    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.392328458s)
	I0311 13:11:57.294626    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.117878458s)
	I0311 13:11:57.294664    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.116821292s)
	I0311 13:11:57.299283    1833 addons.go:470] Verifying addon registry=true in "addons-212000"
	W0311 13:11:57.299292    1833 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 13:11:57.299305    1833 retry.go:31] will retry after 130.229748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
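The failure above is a classic CRD ordering race: the volumesnapshot CRDs and a VolumeSnapshotClass object travel in the same apply, and kubectl cannot map the custom kind until the just-created CRDs are established, hence 'no matches for kind "VolumeSnapshotClass"'. One way to avoid it (a sketch, not what minikube does here, which simply retries) is to wait for the CRD's Established condition before applying custom resources:

    package main

    import (
        "context"
        "log"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForCRD blocks until the named CRD reports Established=True, so that
    // custom resources of its kind can be applied without the "no matches for
    // kind" mapping error seen above.
    func waitForCRD(ctx context.Context, cs apiextclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not created yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitForCRD(context.Background(), cs, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            log.Fatal(err)
        }
        log.Println("CRD established; safe to apply VolumeSnapshotClass objects")
    }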
	I0311 13:11:57.299307    1833 addons.go:470] Verifying addon metrics-server=true in "addons-212000"
	I0311 13:11:57.309219    1833 out.go:177] * Verifying registry addon...
	I0311 13:11:57.317659    1833 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0311 13:11:57.320555    1833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 13:11:57.341249    1833 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 13:11:57.341258    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:11:57.342898    1833 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 13:11:57.342904    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:11:57.431040    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
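The re-run above follows from the retry.go line: after the short backoff minikube repeats the apply, now with --force so kubectl may delete and re-create objects it cannot patch, and by this point the CRDs created on the first attempt have had time to become established. A generic sketch of that retry shape (minikube's actual helper may differ):

    package main

    import (
        "errors"
        "log"
        "time"
    )

    // retryWithBackoff runs fn until it succeeds, doubling the wait between
    // attempts, in the spirit of the "will retry after 130.229748ms" line.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            log.Printf("will retry after %v: %v", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 130*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("ensure CRDs are installed first")
            }
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("succeeded after %d attempts", calls)
    }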
	I0311 13:11:57.835797    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:11:57.841774    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:11:57.937684    1833 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.438281667s)
	I0311 13:11:57.937705    1833 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-212000"
	I0311 13:11:57.948202    1833 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 13:11:57.951593    1833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 13:11:58.001501    1833 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 13:11:58.001511    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
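The kapi.go:75/86/96 lines that dominate the rest of this log are one polling loop per addon: list pods by label selector, report the current state, repeat until every match is Ready. A compact client-go sketch of that loop, using the registry selector from the log (the interval and timeout are assumptions):

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        selector := "kubernetes.io/minikube-addons=registry" // label from the log above
        err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 10*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        log.Printf("waiting for pod %q, current state: %s", selector, pods.Items[i].Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("all registry pods ready")
    }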
	I0311 13:11:58.270091    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:11:58.325434    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:11:58.325513    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:11:58.456528    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:11:58.824409    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:11:58.824531    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:11:58.956341    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:11:59.322686    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:11:59.323495    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:11:59.456224    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:11:59.824224    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:11:59.824503    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:11:59.956136    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:00.324712    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:00.324900    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:00.456089    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:00.772475    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:00.824491    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:00.824610    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:00.865220    1833 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 13:12:00.865234    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:12:00.900472    1833 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 13:12:00.915859    1833 addons.go:234] Setting addon gcp-auth=true in "addons-212000"
	I0311 13:12:00.915886    1833 host.go:66] Checking if "addons-212000" exists ...
	I0311 13:12:00.916817    1833 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 13:12:00.916824    1833 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/addons-212000/id_rsa Username:docker}
	I0311 13:12:00.946291    1833 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 13:12:00.950183    1833 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 13:12:00.954298    1833 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 13:12:00.954307    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 13:12:00.955036    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:00.965430    1833 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 13:12:00.965440    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 13:12:00.977214    1833 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 13:12:00.977223    1833 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 13:12:00.987086    1833 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 13:12:01.325047    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:01.325128    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:01.390433    1833 addons.go:470] Verifying addon gcp-auth=true in "addons-212000"
	I0311 13:12:01.395571    1833 out.go:177] * Verifying gcp-auth addon...
	I0311 13:12:01.403986    1833 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 13:12:01.414532    1833 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 13:12:01.414541    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:01.456128    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:01.853452    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:01.854598    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:01.907727    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:01.956234    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:02.324588    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:02.324755    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:02.407757    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:02.456025    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:02.824994    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:02.825119    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:02.906755    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:02.958248    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:03.270803    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:03.323868    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:03.324345    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:03.407845    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:03.455592    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:03.824171    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:03.824331    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:03.907392    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:03.955544    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:04.324065    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:04.324423    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:04.407937    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:04.456015    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:04.822521    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:04.822777    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:04.907286    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:04.954367    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:05.323839    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:05.324124    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:05.406049    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:05.455476    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:05.770951    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:05.824191    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:05.824374    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:05.907273    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:05.955560    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:06.323840    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:06.324466    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:06.407408    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:06.455379    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:06.825242    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:06.825372    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:06.907467    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:06.955322    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:07.324492    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:07.324583    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:07.406638    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:07.454870    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:07.824294    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:07.824336    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:07.907524    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:07.954667    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:08.270521    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:08.324421    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:08.324553    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:08.407629    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:08.455955    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:08.824458    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:08.824502    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:08.906217    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:08.956583    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:09.324148    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:09.324205    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:09.407519    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:09.455569    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:09.824129    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:09.824498    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:09.907146    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:09.955499    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:10.271129    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:10.324325    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:10.324446    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:10.405930    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:10.455960    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:10.823965    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:10.825474    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:10.907341    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:10.956173    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:11.323983    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:11.324185    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:11.407255    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:11.455707    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:11.824259    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:11.824324    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:11.907122    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:11.955646    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:12.324016    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:12.325254    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:12.405591    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:12.456088    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:12.770932    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:12.825081    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:12.825138    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:12.907483    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:12.955884    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:13.323578    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:13.323647    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:13.407570    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:13.456104    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:13.936091    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:13.936287    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:13.936296    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:13.955762    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:14.323931    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:14.324082    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:14.406928    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:14.456100    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:14.771145    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:14.822953    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:14.823078    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:14.907421    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:14.955679    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:15.323673    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:15.323907    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:15.405798    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:15.453919    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:15.823727    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:15.823830    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:15.905791    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:15.955822    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:16.323724    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:16.323742    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:16.407411    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:16.455378    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:16.823741    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:16.823857    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:16.907125    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:16.955523    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:17.270644    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:17.323951    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:17.324226    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:17.407130    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:17.455756    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:17.825472    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:17.825590    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:17.906613    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:17.953587    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:18.323273    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:18.323347    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:18.407008    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:18.455592    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:18.823430    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:18.823515    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:18.907068    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:18.955731    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:19.272032    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:19.323706    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:19.323759    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:19.407091    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:19.455432    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:19.822430    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:19.822472    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:19.907296    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:19.955477    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:20.324050    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:20.324206    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:20.406040    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:20.455484    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:20.823296    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:20.823508    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:20.907282    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:20.955287    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:21.323510    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:21.323932    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:21.407189    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:21.455264    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:21.771047    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:21.823358    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:21.823648    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:21.907335    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:21.955508    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:22.323711    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:22.323764    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:22.407200    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:22.455353    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:22.823732    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:22.823860    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:22.907048    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:22.955642    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:23.324119    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:23.324278    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:23.407041    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:23.455111    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:23.823488    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:23.823895    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:23.907057    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:23.955168    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:24.270666    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:24.323765    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:24.323898    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:24.407010    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:24.455369    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:24.823974    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:24.824049    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:24.907084    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:24.957044    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:25.323375    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:25.323463    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:25.405477    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:25.455240    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:25.824163    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:25.824237    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:25.905433    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:25.954770    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:26.323781    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:26.323894    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:26.405289    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:26.455641    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:26.770689    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	I0311 13:12:26.823497    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:26.823643    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:26.906826    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:26.955632    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:27.323953    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:27.323991    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:27.407248    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:27.455254    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:27.933738    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:27.933849    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:27.933947    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:27.955714    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:28.323373    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:28.323636    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:28.407497    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:28.455307    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:28.822876    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:28.822881    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:28.908334    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:28.954607    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:29.270441    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	[kapi.go:96 polling repeats roughly every 500ms: registry, ingress-nginx, gcp-auth, and csi-hostpath-driver all still Pending, 13:12:29 to 13:12:31]
	I0311 13:12:31.770509    1833 pod_ready.go:102] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"False"
	[kapi.go:96 polling repeats: registry, ingress-nginx, gcp-auth, and csi-hostpath-driver all still Pending, 13:12:31 to 13:12:33]
	I0311 13:12:34.269300    1833 pod_ready.go:92] pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:34.269308    1833 pod_ready.go:81] duration metric: took 40.004259s for pod "coredns-5dd5756b68-dkvfp" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.269312    1833 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt65t" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.270204    1833 pod_ready.go:97] error getting pod "coredns-5dd5756b68-kt65t" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-kt65t" not found
	I0311 13:12:34.270213    1833 pod_ready.go:81] duration metric: took 898.416µs for pod "coredns-5dd5756b68-kt65t" in "kube-system" namespace to be "Ready" ...
	E0311 13:12:34.270217    1833 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-kt65t" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-kt65t" not found
	I0311 13:12:34.270220    1833 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.272757    1833 pod_ready.go:92] pod "etcd-addons-212000" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:34.272762    1833 pod_ready.go:81] duration metric: took 2.539208ms for pod "etcd-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.272766    1833 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.275676    1833 pod_ready.go:92] pod "kube-apiserver-addons-212000" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:34.275682    1833 pod_ready.go:81] duration metric: took 2.912166ms for pod "kube-apiserver-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.275685    1833 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.277961    1833 pod_ready.go:92] pod "kube-controller-manager-addons-212000" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:34.277966    1833 pod_ready.go:81] duration metric: took 2.278ms for pod "kube-controller-manager-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.277969    1833 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8wlj2" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.323205    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:34.323244    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:34.406896    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:34.455569    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:34.469987    1833 pod_ready.go:92] pod "kube-proxy-8wlj2" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:34.469993    1833 pod_ready.go:81] duration metric: took 192.025792ms for pod "kube-proxy-8wlj2" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.469998    1833 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.824079    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:34.824165    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:34.868476    1833 pod_ready.go:92] pod "kube-scheduler-addons-212000" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:34.868485    1833 pod_ready.go:81] duration metric: took 398.495ms for pod "kube-scheduler-addons-212000" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.868489    1833 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jr55d" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:34.906766    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:34.955198    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:35.270234    1833 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jr55d" in "kube-system" namespace has status "Ready":"True"
	I0311 13:12:35.270244    1833 pod_ready.go:81] duration metric: took 401.762083ms for pod "nvidia-device-plugin-daemonset-jr55d" in "kube-system" namespace to be "Ready" ...
	I0311 13:12:35.270247    1833 pod_ready.go:38] duration metric: took 41.026988542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
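
The pod_ready.go lines above record minikube polling each system pod until its Ready condition is True. A minimal client-go sketch of that kind of check (the kubeconfig path, pod name, and poll interval below are illustrative assumptions, not minikube's actual implementation):

// Sketch: poll a named pod until its Ready condition is True.
// Assumes a reachable kubeconfig; names and intervals are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirrors the 6m0s budget seen in the log lines above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-dkvfp", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
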
	I0311 13:12:35.270257    1833 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:12:35.270324    1833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:12:35.277942    1833 api_server.go:72] duration metric: took 41.912492167s to wait for apiserver process to appear ...
	I0311 13:12:35.277952    1833 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:12:35.277958    1833 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0311 13:12:35.282047    1833 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
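
For reference, the healthz check logged here is a plain HTTPS GET against the apiserver that succeeds when the body is "ok". A self-contained sketch of such a probe, using the address from the log (TLS verification is skipped purely for illustration; the real client authenticates with the cluster's credentials):

// Sketch: probe the apiserver /healthz endpoint.
// InsecureSkipVerify is for this illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.105.2:8443/healthz") // address from the log
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
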
	I0311 13:12:35.283013    1833 api_server.go:141] control plane version: v1.28.4
	I0311 13:12:35.283019    1833 api_server.go:131] duration metric: took 5.065208ms to wait for apiserver health ...
	I0311 13:12:35.283022    1833 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 13:12:35.321569    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:35.321628    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:35.405838    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:35.455129    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:35.473604    1833 system_pods.go:59] 17 kube-system pods found
	I0311 13:12:35.473613    1833 system_pods.go:61] "coredns-5dd5756b68-dkvfp" [45d2bef1-eeaf-4784-a749-19132dc73287] Running
	I0311 13:12:35.473617    1833 system_pods.go:61] "csi-hostpath-attacher-0" [d8185c6c-719e-4dfa-b4bc-426d4dddfeda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 13:12:35.473620    1833 system_pods.go:61] "csi-hostpath-resizer-0" [faba8b36-e992-4ce2-8dd8-e17b339ad010] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 13:12:35.473623    1833 system_pods.go:61] "csi-hostpathplugin-ntks4" [c0f28b38-56b6-4fbd-80ed-01a1f4b844a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 13:12:35.473626    1833 system_pods.go:61] "etcd-addons-212000" [f8405c7a-eac9-438f-8f71-96bcba5d54af] Running
	I0311 13:12:35.473627    1833 system_pods.go:61] "kube-apiserver-addons-212000" [5f217d82-3930-453f-a1a6-195304fd45e3] Running
	I0311 13:12:35.473629    1833 system_pods.go:61] "kube-controller-manager-addons-212000" [7377afd9-6c9b-47ab-8043-7f61ff7d38a3] Running
	I0311 13:12:35.473632    1833 system_pods.go:61] "kube-ingress-dns-minikube" [6796c139-3d20-4c7b-9304-6e578f4598cf] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 13:12:35.473634    1833 system_pods.go:61] "kube-proxy-8wlj2" [fe55068c-e466-4489-ae17-caf24b9bcd93] Running
	I0311 13:12:35.473635    1833 system_pods.go:61] "kube-scheduler-addons-212000" [2cb80da9-c3d8-42fc-a29b-35192510c962] Running
	I0311 13:12:35.473638    1833 system_pods.go:61] "metrics-server-69cf46c98-jrn44" [88601ae3-d859-4f45-a68e-2b495200f659] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 13:12:35.473639    1833 system_pods.go:61] "nvidia-device-plugin-daemonset-jr55d" [4cf28a72-abc4-4e97-b74c-a9745af1ee62] Running
	I0311 13:12:35.473641    1833 system_pods.go:61] "registry-dwfq8" [25875c8f-9c9e-472a-8003-09b286771012] Running
	I0311 13:12:35.473643    1833 system_pods.go:61] "registry-proxy-j7scl" [39ea9ca9-f9f5-46f0-a2c9-6b2d012f26db] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 13:12:35.473646    1833 system_pods.go:61] "snapshot-controller-58dbcc7b99-gk8sn" [5a496bb4-9aa4-464e-a538-14c836db87d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 13:12:35.473650    1833 system_pods.go:61] "snapshot-controller-58dbcc7b99-k5j7v" [685cf454-7c78-477a-9548-ef9c09f453f6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 13:12:35.473652    1833 system_pods.go:61] "storage-provisioner" [5c39f471-5136-4999-8063-9788d3ed886e] Running
	I0311 13:12:35.473655    1833 system_pods.go:74] duration metric: took 190.635875ms to wait for pod list to return data ...
	I0311 13:12:35.473659    1833 default_sa.go:34] waiting for default service account to be created ...
	I0311 13:12:35.670045    1833 default_sa.go:45] found service account: "default"
	I0311 13:12:35.670056    1833 default_sa.go:55] duration metric: took 196.39875ms for default service account to be created ...
	I0311 13:12:35.670059    1833 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 13:12:35.824459    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 13:12:35.825047    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:35.873516    1833 system_pods.go:86] 17 kube-system pods found
	[system_pods.go:89 pod list: identical to the 17-pod system_pods.go:61 list above]
	I0311 13:12:35.873585    1833 system_pods.go:126] duration metric: took 203.5295ms to wait for k8s-apps to be running ...
	I0311 13:12:35.873590    1833 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 13:12:35.873642    1833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:12:35.879722    1833 system_svc.go:56] duration metric: took 6.129708ms WaitForService to wait for kubelet
	I0311 13:12:35.879735    1833 kubeadm.go:576] duration metric: took 42.514302292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:12:35.879745    1833 node_conditions.go:102] verifying NodePressure condition ...
	I0311 13:12:35.906520    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:35.954459    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:36.068501    1833 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0311 13:12:36.068512    1833 node_conditions.go:123] node cpu capacity is 2
	I0311 13:12:36.068517    1833 node_conditions.go:105] duration metric: took 188.773875ms to run NodePressure ...
	I0311 13:12:36.068523    1833 start.go:240] waiting for startup goroutines ...
	[kapi.go:96 polling repeats: registry, ingress-nginx, gcp-auth, and csi-hostpath-driver all still Pending, 13:12:36 to 13:12:38]
	I0311 13:12:39.323563    1833 kapi.go:107] duration metric: took 42.004180041s to wait for kubernetes.io/minikube-addons=registry ...
	I0311 13:12:39.323609    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:39.406806    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:39.453523    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:39.823502    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:39.906415    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:39.954832    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:40.323367    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:40.405517    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:40.455369    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:40.823348    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:40.906560    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:40.954910    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:41.323114    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:41.407079    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:41.453079    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:41.823238    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:41.906664    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:41.953514    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:42.323401    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:42.407097    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:42.455224    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:42.823299    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:42.906625    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:42.954992    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:43.323450    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:43.406851    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:43.455211    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:43.823545    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:43.906372    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:44.242735    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:44.323566    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:44.404933    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:44.455659    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:44.823337    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:44.906550    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:44.954703    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:45.323727    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:45.405273    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:45.454928    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:45.823559    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:45.906326    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:45.954448    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:46.322993    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:46.406365    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:46.454496    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:46.823116    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:46.906165    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:46.954729    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:47.322981    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:47.406575    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:47.454895    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:47.823584    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:47.906562    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:47.954597    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:48.322774    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:48.406458    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:48.454355    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:48.823288    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:48.906214    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:48.954741    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:49.325797    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:49.406314    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:49.454435    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:49.823227    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:49.908066    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:49.954436    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:50.322908    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:50.405168    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:50.454915    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:50.823381    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:50.905963    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:50.954503    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:51.322979    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:51.406081    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:51.454434    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:51.823260    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:51.906200    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:51.954756    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:52.322822    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:52.406449    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:52.455032    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:52.822973    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:52.906116    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:52.954315    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:53.323182    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:53.406463    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:53.454904    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:53.822336    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:53.906197    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:53.954863    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:54.324881    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:54.406408    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:54.454603    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:54.823001    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:54.905893    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:54.954470    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:55.323446    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:55.404824    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:55.454439    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:55.823280    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:55.905921    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:55.954414    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:56.322583    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:56.405691    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:56.454414    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:56.823102    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:56.906121    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:56.954490    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:57.322880    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:57.406402    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:57.454561    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:57.822779    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:57.905945    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:57.954477    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:58.322986    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:58.406125    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:58.454521    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:58.823213    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:58.906583    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:58.954939    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:59.322516    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:59.405834    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:59.454330    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:12:59.823058    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:12:59.905866    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:12:59.954133    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:00.322750    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:00.404681    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:00.454193    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:00.823200    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:00.905745    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:00.954239    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:01.323104    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:01.407029    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:01.455059    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:01.822714    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:01.904813    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:01.954511    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:02.322731    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:02.405994    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:02.454280    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:02.823476    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:02.905895    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:02.954398    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:03.322347    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:03.406225    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:03.453701    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:03.822782    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:03.905893    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:03.954079    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:04.322674    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:04.405789    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:04.454050    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:04.822763    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:04.905836    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:04.954598    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:05.322673    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:05.404045    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:05.453959    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:05.822515    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:05.905610    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:05.954065    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:06.322646    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:06.406880    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:06.453978    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:06.822618    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:06.905986    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:06.954676    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:07.322872    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:07.405648    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:07.454127    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:07.822637    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:07.906107    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:07.958698    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:08.322701    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:08.405408    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:08.453979    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:08.823213    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:08.905672    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:08.954014    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:09.322453    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:09.405593    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:09.454018    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:09.822805    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:09.905628    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:09.953889    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:10.322506    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:10.404514    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:10.454873    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:10.822677    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:10.905567    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:10.953841    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:11.322609    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:11.403915    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:11.453875    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:11.822237    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:11.905625    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:11.954250    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:12.322411    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:12.405707    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:12.454075    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:12.822566    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:12.926047    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:12.952465    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:13.322578    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:13.405552    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:13.453914    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:13.822875    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:13.905725    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:13.953919    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:14.322538    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:14.405972    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:14.454168    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:14.822558    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:14.905911    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:14.954353    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:15.322335    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:15.404215    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:15.453715    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:15.822441    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:15.905631    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:15.955279    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 13:13:16.322653    1833 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 13:13:16.405449    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:16.453653    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the preceding 3 "waiting for pod" lines repeat in the same rotation every ~500 ms until 13:13:25; 54 near-identical lines omitted ...]
	I0311 13:13:25.822162    1833 kapi.go:107] duration metric: took 1m28.506976666s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 13:13:25.905189    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 13:13:25.954045    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the preceding 2 "waiting for pod" lines repeat every ~500 ms until 13:13:46; 83 near-identical lines omitted ...]
	I0311 13:13:46.951421    1833 kapi.go:107] duration metric: took 1m49.0028705s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 13:13:47.405350    1833 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the preceding "waiting for pod" line repeats every ~500 ms until 13:14:28; 82 near-identical lines omitted ...]
	I0311 13:14:28.903729    1833 kapi.go:107] duration metric: took 2m27.503861667s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0311 13:14:28.908885    1833 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-212000 cluster.
	I0311 13:14:28.918694    1833 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 13:14:28.922841    1833 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0311 13:14:28.925941    1833 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, yakd, ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0311 13:14:28.929825    1833 addons.go:505] duration metric: took 2m35.567543541s for enable addons: enabled=[nvidia-device-plugin default-storageclass yakd ingress-dns storage-provisioner cloud-spanner inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0311 13:14:28.929838    1833 start.go:245] waiting for cluster config update ...
	I0311 13:14:28.929848    1833 start.go:254] writing updated cluster config ...
	I0311 13:14:28.930353    1833 ssh_runner.go:195] Run: rm -f paused
	I0311 13:14:29.080231    1833 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0311 13:14:29.084872    1833 out.go:177] * Done! kubectl is now configured to use "addons-212000" cluster and "default" namespace by default
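
	The condensed kapi.go:96 lines above come from a plain list-and-retry poll against a label selector, and the gcp-auth messages describe a label-based opt-out for credential mounting. The sketch below is illustrative only, assuming client-go; the function names (waitForPodsByLabel, createPodWithoutGCPCreds), the pod name, and the image are invented for this example and are not minikube's actual code.

```go
package addonsdemo

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel mirrors the polling pattern visible in the log: list pods
// matching a label selector and retry (~500 ms apart, matching the timestamps
// above) until every match is Running or the deadline passes.
func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods matching %s", selector)
}

// createPodWithoutGCPCreds shows the opt-out label the gcp-auth message above
// refers to: a pod carrying the gcp-auth-skip-secret label should be skipped
// by the gcp-auth webhook, so no GCP credentials are mounted into it.
// Pod name and image here are hypothetical.
func createPodWithoutGCPCreds(ctx context.Context, c kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := c.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```

	The list-and-sleep loop matches the cadence in the log; a production implementation would more likely use a watch or shared informer than repeated List calls.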
	
	
	==> Docker <==
	Mar 11 20:15:31 addons-212000 dockerd[1120]: time="2024-03-11T20:15:31.785799272Z" level=info msg="shim disconnected" id=6e2f017ef4558dd4de8ce5fae89b725baea2c6f2b7c7a226198a87576bd55f79 namespace=moby
	Mar 11 20:15:31 addons-212000 dockerd[1114]: time="2024-03-11T20:15:31.785948702Z" level=info msg="ignoring event" container=6e2f017ef4558dd4de8ce5fae89b725baea2c6f2b7c7a226198a87576bd55f79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:15:31 addons-212000 dockerd[1120]: time="2024-03-11T20:15:31.786256603Z" level=warning msg="cleaning up after shim disconnected" id=6e2f017ef4558dd4de8ce5fae89b725baea2c6f2b7c7a226198a87576bd55f79 namespace=moby
	Mar 11 20:15:31 addons-212000 dockerd[1120]: time="2024-03-11T20:15:31.786265937Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 20:15:32 addons-212000 dockerd[1114]: time="2024-03-11T20:15:32.591202439Z" level=info msg="ignoring event" container=ae14f3a742540a994450e5feb49f584873de6af55363e79e4a32b8d5a8843e7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:15:32 addons-212000 dockerd[1120]: time="2024-03-11T20:15:32.591370911Z" level=info msg="shim disconnected" id=ae14f3a742540a994450e5feb49f584873de6af55363e79e4a32b8d5a8843e7a namespace=moby
	Mar 11 20:15:32 addons-212000 dockerd[1120]: time="2024-03-11T20:15:32.591402914Z" level=warning msg="cleaning up after shim disconnected" id=ae14f3a742540a994450e5feb49f584873de6af55363e79e4a32b8d5a8843e7a namespace=moby
	Mar 11 20:15:32 addons-212000 dockerd[1120]: time="2024-03-11T20:15:32.591407289Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.677085428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.677288029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.677304322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.677395413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:15:33 addons-212000 cri-dockerd[1007]: time="2024-03-11T20:15:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/26d0aea0e34965b6c539490b2669eeb42729a46213864df3db9132983609d55d/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.774442881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.774469966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.774483634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.774510178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:15:33 addons-212000 dockerd[1114]: time="2024-03-11T20:15:33.804633531Z" level=info msg="ignoring event" container=a07808c3b0201428b4c9072ce2998932aaa2dd2c190fc821442dc46a9e9bca4c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.804846049Z" level=info msg="shim disconnected" id=a07808c3b0201428b4c9072ce2998932aaa2dd2c190fc821442dc46a9e9bca4c namespace=moby
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.804946474Z" level=warning msg="cleaning up after shim disconnected" id=a07808c3b0201428b4c9072ce2998932aaa2dd2c190fc821442dc46a9e9bca4c namespace=moby
	Mar 11 20:15:33 addons-212000 dockerd[1120]: time="2024-03-11T20:15:33.804955850Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 20:15:35 addons-212000 dockerd[1114]: time="2024-03-11T20:15:35.628719832Z" level=info msg="ignoring event" container=26d0aea0e34965b6c539490b2669eeb42729a46213864df3db9132983609d55d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:15:35 addons-212000 dockerd[1120]: time="2024-03-11T20:15:35.628925432Z" level=info msg="shim disconnected" id=26d0aea0e34965b6c539490b2669eeb42729a46213864df3db9132983609d55d namespace=moby
	Mar 11 20:15:35 addons-212000 dockerd[1120]: time="2024-03-11T20:15:35.629330215Z" level=warning msg="cleaning up after shim disconnected" id=26d0aea0e34965b6c539490b2669eeb42729a46213864df3db9132983609d55d namespace=moby
	Mar 11 20:15:35 addons-212000 dockerd[1120]: time="2024-03-11T20:15:35.629340549Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	a07808c3b0201       fc9db2894f4e4                                                                                                                3 seconds ago        Exited              helper-pod                 0                   26d0aea0e3496       helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a
	befcba0f46e1f       busybox@sha256:650fd573e056b679a5110a70aabeb01e26b76e545ec4b9c70a9523f2dfaf18c6                                              5 seconds ago        Exited              busybox                    0                   ae14f3a742540       test-local-path
	fbffd5d18eab9       dd1b12fcb6097                                                                                                                12 seconds ago       Exited              hello-world-app            1                   a0339881c84b9       hello-world-app-5d77478584-wbp8r
	6bdef9ae1f8f5       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                                30 seconds ago       Running             nginx                      0                   9fbbc9bae4f96       nginx
	e4123230b28db       gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b                          51 seconds ago       Exited              registry-test              0                   bc1b9f2e36239       registry-test
	91950736b14b8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 About a minute ago   Running             gcp-auth                   0                   ac79b1dcd395a       gcp-auth-5f6b4f85fd-46gqm
	22f86d9f04bfe       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner     0                   0d11210205237       local-path-provisioner-78b46b4d5c-rtq4z
	de1419295e56f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   2 minutes ago        Exited              create                     0                   753d9fb3e47f0       ingress-nginx-admission-create-65bfl
	441aaa50214ee       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   2 minutes ago        Exited              patch                      0                   feffb802b27bb       ingress-nginx-admission-patch-qxln7
	1d01cd17d652c       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15               3 minutes ago        Running             cloud-spanner-emulator     0                   369f1be8a8171       cloud-spanner-emulator-6548d5df46-84hrc
	5bec9d0410725       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        3 minutes ago        Running             yakd                       0                   0d1a7d53349ef       yakd-dashboard-9947fc6bf-k84vk
	d3f86cde998a3       nvcr.io/nvidia/k8s-device-plugin@sha256:50aa9517d771e3b0ffa7fded8f1e988dba680a7ff5efce162ce31d1b5ec043e2                     3 minutes ago        Running             nvidia-device-plugin-ctr   0                   63f0fb040e0ce       nvidia-device-plugin-daemonset-jr55d
	bbd1b05f1c2a2       ba04bb24b9575                                                                                                                3 minutes ago        Running             storage-provisioner        0                   23c78636abaa9       storage-provisioner
	1f23aef88fa10       97e04611ad434                                                                                                                3 minutes ago        Running             coredns                    0                   c67b824918002       coredns-5dd5756b68-dkvfp
	1b6860c49eb2e       3ca3ca488cf13                                                                                                                3 minutes ago        Running             kube-proxy                 0                   5a08490c54a21       kube-proxy-8wlj2
	4ac15ba960d5c       04b4c447bb9d4                                                                                                                4 minutes ago        Running             kube-apiserver             0                   1890afbca09f2       kube-apiserver-addons-212000
	600a02318ba2a       05c284c929889                                                                                                                4 minutes ago        Running             kube-scheduler             0                   73d04b5599260       kube-scheduler-addons-212000
	4e364d51db379       9cdd6470f48c8                                                                                                                4 minutes ago        Running             etcd                       0                   4fdef34c13f98       etcd-addons-212000
	9f4ea0a0316e0       9961cbceaf234                                                                                                                4 minutes ago        Running             kube-controller-manager    0                   ef914d5814480       kube-controller-manager-addons-212000
	
	
	==> coredns [1f23aef88fa1] <==
	[INFO] 10.244.0.20:51148 - 17775 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028586s
	[INFO] 10.244.0.20:50535 - 54547 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026252s
	[INFO] 10.244.0.20:51148 - 5155 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031252s
	[INFO] 10.244.0.20:50535 - 43612 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000024669s
	[INFO] 10.244.0.20:51148 - 42549 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025461s
	[INFO] 10.244.0.20:50535 - 56635 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024627s
	[INFO] 10.244.0.20:51148 - 44923 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010542s
	[INFO] 10.244.0.20:50535 - 32871 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002396s
	[INFO] 10.244.0.20:51148 - 8302 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010459s
	[INFO] 10.244.0.20:50535 - 35347 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046046s
	[INFO] 10.244.0.20:51148 - 57517 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000025878s
	[INFO] 10.244.0.20:46397 - 473 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042879s
	[INFO] 10.244.0.20:46397 - 43335 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003617s
	[INFO] 10.244.0.20:34951 - 53981 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000181975s
	[INFO] 10.244.0.20:46397 - 1554 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001371s
	[INFO] 10.244.0.20:34951 - 24906 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016877s
	[INFO] 10.244.0.20:46397 - 25815 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011001s
	[INFO] 10.244.0.20:34951 - 13189 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003842s
	[INFO] 10.244.0.20:46397 - 14114 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000015835s
	[INFO] 10.244.0.20:34951 - 5619 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011084s
	[INFO] 10.244.0.20:34951 - 35180 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013501s
	[INFO] 10.244.0.20:46397 - 40462 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001496s
	[INFO] 10.244.0.20:34951 - 1615 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014918s
	[INFO] 10.244.0.20:46397 - 25576 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002371s
	[INFO] 10.244.0.20:34951 - 47040 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000011501s
	
	
	==> describe nodes <==
	Name:               addons-212000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-212000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=addons-212000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T13_11_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-212000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:11:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-212000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:15:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:15:14 +0000   Mon, 11 Mar 2024 20:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:15:14 +0000   Mon, 11 Mar 2024 20:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:15:14 +0000   Mon, 11 Mar 2024 20:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:15:14 +0000   Mon, 11 Mar 2024 20:11:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-212000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2009a6d2764402389567a0c8318d6f3
	  System UUID:                e2009a6d2764402389567a0c8318d6f3
	  Boot ID:                    d746a18b-6355-4548-a935-26531dbdec22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-84hrc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     hello-world-app-5d77478584-wbp8r           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-5f6b4f85fd-46gqm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-5dd5756b68-dkvfp                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m43s
	  kube-system                 etcd-addons-212000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m58s
	  kube-system                 kube-apiserver-addons-212000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-addons-212000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-8wlj2                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-scheduler-addons-212000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 nvidia-device-plugin-daemonset-jr55d       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  local-path-storage          local-path-provisioner-78b46b4d5c-rtq4z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-k84vk             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             298Mi (7%)   426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m41s                kube-proxy       
	  Normal  Starting                 4m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node addons-212000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node addons-212000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node addons-212000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m56s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m56s                kubelet          Node addons-212000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s                kubelet          Node addons-212000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s                kubelet          Node addons-212000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m55s                kubelet          Node addons-212000 status is now: NodeReady
	  Normal  RegisteredNode           3m44s                node-controller  Node addons-212000 event: Registered Node addons-212000 in Controller
	
	
	==> dmesg <==
	[  +5.111160] systemd-fstab-generator[2434]: Ignoring "noauto" option for root device
	[  +0.045476] kauditd_printk_skb: 64 callbacks suppressed
	[ +13.460494] systemd-fstab-generator[3042]: Ignoring "noauto" option for root device
	[  +0.047947] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.043239] kauditd_printk_skb: 232 callbacks suppressed
	[Mar11 20:12] kauditd_printk_skb: 51 callbacks suppressed
	[ +24.176298] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.835716] kauditd_printk_skb: 10 callbacks suppressed
	[ +15.503555] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.038078] kauditd_printk_skb: 9 callbacks suppressed
	[Mar11 20:13] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.721353] kauditd_printk_skb: 13 callbacks suppressed
	[ +16.662168] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.986405] kauditd_printk_skb: 4 callbacks suppressed
	[  +8.369919] kauditd_printk_skb: 20 callbacks suppressed
	[Mar11 20:14] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.785542] kauditd_printk_skb: 2 callbacks suppressed
	[ +27.055819] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.143679] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.154541] kauditd_printk_skb: 14 callbacks suppressed
	[Mar11 20:15] kauditd_printk_skb: 3 callbacks suppressed
	[  +9.217992] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.258692] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.548154] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.068789] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [4e364d51db37] <==
	{"level":"info","ts":"2024-03-11T20:11:36.773811Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:11:36.773925Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:11:36.773958Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:11:36.773995Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T20:11:36.774012Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T20:11:36.774036Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:11:36.775268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T20:11:36.775348Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-03-11T20:12:13.98385Z","caller":"traceutil/trace.go:171","msg":"trace[1195964038] linearizableReadLoop","detail":"{readStateIndex:905; appliedIndex:904; }","duration":"136.548751ms","start":"2024-03-11T20:12:13.847289Z","end":"2024-03-11T20:12:13.983837Z","steps":["trace[1195964038] 'read index received'  (duration: 133.625818ms)","trace[1195964038] 'applied index is now lower than readState.Index'  (duration: 2.921724ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-11T20:12:13.983972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.683685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-5dd5756b68-dkvfp.17bbced9195f5fe0\" ","response":"range_response_count:1 size:787"}
	{"level":"warn","ts":"2024-03-11T20:12:13.983982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.849358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13435"}
	{"level":"info","ts":"2024-03-11T20:12:13.983991Z","caller":"traceutil/trace.go:171","msg":"trace[1115383618] range","detail":"{range_begin:/registry/events/kube-system/coredns-5dd5756b68-dkvfp.17bbced9195f5fe0; range_end:; response_count:1; response_revision:883; }","duration":"136.714189ms","start":"2024-03-11T20:12:13.847273Z","end":"2024-03-11T20:12:13.983987Z","steps":["trace[1115383618] 'agreement among raft nodes before linearized reading'  (duration: 136.658182ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:12:13.983998Z","caller":"traceutil/trace.go:171","msg":"trace[999438683] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:883; }","duration":"110.867401ms","start":"2024-03-11T20:12:13.873126Z","end":"2024-03-11T20:12:13.983994Z","steps":["trace[999438683] 'agreement among raft nodes before linearized reading'  (duration: 110.828646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:13.984284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.153482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78148"}
	{"level":"info","ts":"2024-03-11T20:12:13.984296Z","caller":"traceutil/trace.go:171","msg":"trace[1539746857] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:883; }","duration":"111.166317ms","start":"2024-03-11T20:12:13.873126Z","end":"2024-03-11T20:12:13.984292Z","steps":["trace[1539746857] 'agreement among raft nodes before linearized reading'  (duration: 111.093932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:27.983912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.312657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13435"}
	{"level":"warn","ts":"2024-03-11T20:12:27.983945Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.432131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78571"}
	{"level":"info","ts":"2024-03-11T20:12:27.984045Z","caller":"traceutil/trace.go:171","msg":"trace[836823920] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:913; }","duration":"109.451092ms","start":"2024-03-11T20:12:27.874507Z","end":"2024-03-11T20:12:27.983958Z","steps":["trace[836823920] 'range keys from in-memory index tree'  (duration: 109.310157ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:27.984354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.453001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-dkvfp\" ","response":"range_response_count:1 size:4755"}
	{"level":"info","ts":"2024-03-11T20:12:27.984364Z","caller":"traceutil/trace.go:171","msg":"trace[1155070316] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-dkvfp; range_end:; response_count:1; response_revision:913; }","duration":"163.464211ms","start":"2024-03-11T20:12:27.820897Z","end":"2024-03-11T20:12:27.984361Z","steps":["trace[1155070316] 'range keys from in-memory index tree'  (duration: 163.416246ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-11T20:12:27.983949Z","caller":"traceutil/trace.go:171","msg":"trace[1469337578] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:913; }","duration":"109.351371ms","start":"2024-03-11T20:12:27.874586Z","end":"2024-03-11T20:12:27.983938Z","steps":["trace[1469337578] 'range keys from in-memory index tree'  (duration: 109.281861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:12:44.295597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.08629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78585"}
	{"level":"info","ts":"2024-03-11T20:12:44.29582Z","caller":"traceutil/trace.go:171","msg":"trace[248373190] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:968; }","duration":"287.31482ms","start":"2024-03-11T20:12:44.008498Z","end":"2024-03-11T20:12:44.295812Z","steps":["trace[248373190] 'range keys from in-memory index tree'  (duration: 286.983944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-11T20:13:24.622724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.368438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79132"}
	{"level":"info","ts":"2024-03-11T20:13:24.622758Z","caller":"traceutil/trace.go:171","msg":"trace[2019521565] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1123; }","duration":"109.409192ms","start":"2024-03-11T20:13:24.513343Z","end":"2024-03-11T20:13:24.622752Z","steps":["trace[2019521565] 'range keys from in-memory index tree'  (duration: 109.231672ms)"],"step_count":1}
	
	
	==> gcp-auth [91950736b14b] <==
	2024/03/11 20:14:28 GCP Auth Webhook started!
	2024/03/11 20:14:39 Ready to marshal response ...
	2024/03/11 20:14:39 Ready to write response ...
	2024/03/11 20:14:39 Ready to marshal response ...
	2024/03/11 20:14:39 Ready to write response ...
	2024/03/11 20:15:03 Ready to marshal response ...
	2024/03/11 20:15:03 Ready to write response ...
	2024/03/11 20:15:05 Ready to marshal response ...
	2024/03/11 20:15:05 Ready to write response ...
	2024/03/11 20:15:12 Ready to marshal response ...
	2024/03/11 20:15:12 Ready to write response ...
	2024/03/11 20:15:21 Ready to marshal response ...
	2024/03/11 20:15:21 Ready to write response ...
	2024/03/11 20:15:21 Ready to marshal response ...
	2024/03/11 20:15:21 Ready to write response ...
	2024/03/11 20:15:33 Ready to marshal response ...
	2024/03/11 20:15:33 Ready to write response ...
	
	
	==> kernel <==
	 20:15:36 up 4 min,  0 users,  load average: 1.09, 0.76, 0.34
	Linux addons-212000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ac15ba960d5] <==
	I0311 20:15:03.319026       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0311 20:15:03.423379       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.96.15"}
	E0311 20:15:07.460137       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I0311 20:15:12.640572       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.68.63"}
	E0311 20:15:17.461129       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I0311 20:15:20.827217       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.827232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.837995       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.838260       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.838568       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.838733       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.844713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.845298       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.851862       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.851924       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.852094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.852129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.859261       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.859285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0311 20:15:20.862689       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0311 20:15:20.863087       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0311 20:15:21.845376       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0311 20:15:21.863414       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0311 20:15:21.868130       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0311 20:15:27.461235       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
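
The recurring "Unable to derive new concurrency limits" error comes from the API Priority and Fairness controller failing to solve its bound-constrained concurrency allocation across the eight priority levels it lists. A sketch of inspecting those objects, using only the standard flowcontrol API group:

	kubectl --context addons-212000 get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
	kubectl --context addons-212000 get flowschemas.flowcontrol.apiserver.k8s.io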
	
	
	==> kube-controller-manager [9f4ea0a0316e] <==
	I0311 20:15:23.174252       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0311 20:15:23.174316       1 shared_informer.go:318] Caches are synced for garbage collector
	W0311 20:15:23.325452       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:23.325471       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 20:15:24.425575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="29.544µs"
	W0311 20:15:24.917415       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:24.917456       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 20:15:25.435484       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.213µs"
	W0311 20:15:25.537017       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:25.537034       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:25.953133       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:25.953248       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 20:15:26.447348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.961µs"
	I0311 20:15:28.653516       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0311 20:15:28.655280       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="2.625µs"
	I0311 20:15:28.656936       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0311 20:15:29.204220       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:29.204238       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:29.430912       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:29.430931       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:31.079437       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:31.079456       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0311 20:15:32.076598       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 20:15:32.076614       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 20:15:33.575207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="2.875µs"
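
The garbage collector's metadata informer keeps failing to list *v1.PartialObjectMetadata because the volume-snapshot CRDs were removed moments earlier (the kube-apiserver log above shows their watchers being terminated at 20:15:21), so its cached discovery still references resources that no longer exist. A sketch of confirming what remains of that API group, assuming nothing beyond stock kubectl:

	kubectl --context addons-212000 get crd | grep snapshot.storage.k8s.io
	kubectl --context addons-212000 api-resources --api-group=snapshot.storage.k8s.io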
	
	
	==> kube-proxy [1b6860c49eb2] <==
	I0311 20:11:54.403238       1 server_others.go:69] "Using iptables proxy"
	I0311 20:11:54.411050       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0311 20:11:54.464471       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:11:54.464484       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:11:54.469019       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:11:54.469045       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:11:54.469138       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:11:54.469144       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:11:54.469782       1 config.go:188] "Starting service config controller"
	I0311 20:11:54.469805       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:11:54.469820       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:11:54.469823       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:11:54.470734       1 config.go:315] "Starting node config controller"
	I0311 20:11:54.470738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:11:54.570394       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:11:54.570420       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:11:54.571101       1 shared_informer.go:318] Caches are synced for node config
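
kube-proxy is running in iptables mode here, so every Service is realized as rules hanging off the KUBE-SERVICES chain in the nat table. A sketch of inspecting them from the host while the node is up:

	out/minikube-darwin-arm64 ssh -p addons-212000 -- "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"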
	
	
	==> kube-scheduler [600a02318ba2] <==
	W0311 20:11:37.420511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 20:11:37.420515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 20:11:37.420526       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:11:37.420528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:11:37.420539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 20:11:37.420541       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 20:11:37.420554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:11:37.420556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:11:37.420566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 20:11:37.420569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 20:11:37.420599       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 20:11:37.420603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 20:11:37.420614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 20:11:37.420617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 20:11:37.420630       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 20:11:37.420642       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 20:11:37.420661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 20:11:37.420665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 20:11:37.420704       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:11:37.420711       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 20:11:37.421106       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:11:37.421150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:11:38.389083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:11:38.389103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0311 20:11:38.718589       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
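
The burst of "forbidden" list/watch errors at 20:11:37 is the usual scheduler startup race: its informers start before the apiserver has published the default RBAC bindings, and the errors stop once caches sync a second later (20:11:38). A sketch of confirming the binding, assuming stock cluster defaults:

	kubectl --context addons-212000 get clusterrolebinding system:kube-scheduler -o wide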
	
	
	==> kubelet <==
	Mar 11 20:15:32 addons-212000 kubelet[2441]: I0311 20:15:32.815111    2441 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a05de413-867f-426f-a796-07e67403cf09-gcp-creds\") on node \"addons-212000\" DevicePath \"\""
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.331645    2441 topology_manager.go:215] "Topology Admit Handler" podUID="3b753d2b-a0b2-4fe1-9917-1c07afab39ee" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: E0311 20:15:33.331686    2441 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6796c139-3d20-4c7b-9304-6e578f4598cf" containerName="minikube-ingress-dns"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: E0311 20:15:33.331692    2441 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a05de413-867f-426f-a796-07e67403cf09" containerName="busybox"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.331708    2441 memory_manager.go:346] "RemoveStaleState removing state" podUID="a05de413-867f-426f-a796-07e67403cf09" containerName="busybox"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.331712    2441 memory_manager.go:346] "RemoveStaleState removing state" podUID="6796c139-3d20-4c7b-9304-6e578f4598cf" containerName="minikube-ingress-dns"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.331716    2441 memory_manager.go:346] "RemoveStaleState removing state" podUID="6796c139-3d20-4c7b-9304-6e578f4598cf" containerName="minikube-ingress-dns"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.420146    2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-data\") pod \"helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") " pod="local-path-storage/helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.420183    2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-script\") pod \"helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") " pod="local-path-storage/helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.420197    2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvtg\" (UniqueName: \"kubernetes.io/projected/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-kube-api-access-zmvtg\") pod \"helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") " pod="local-path-storage/helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.420208    2441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-gcp-creds\") pod \"helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") " pod="local-path-storage/helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a"
	Mar 11 20:15:33 addons-212000 kubelet[2441]: I0311 20:15:33.529141    2441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae14f3a742540a994450e5feb49f584873de6af55363e79e4a32b8d5a8843e7a"
	Mar 11 20:15:34 addons-212000 kubelet[2441]: I0311 20:15:34.144150    2441 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a05de413-867f-426f-a796-07e67403cf09" path="/var/lib/kubelet/pods/a05de413-867f-426f-a796-07e67403cf09/volumes"
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733181    2441 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-gcp-creds\") pod \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") "
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733210    2441 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-data\") pod \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") "
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733229    2441 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmvtg\" (UniqueName: \"kubernetes.io/projected/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-kube-api-access-zmvtg\") pod \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") "
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733255    2441 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-script\") pod \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\" (UID: \"3b753d2b-a0b2-4fe1-9917-1c07afab39ee\") "
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733427    2441 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-data" (OuterVolumeSpecName: "data") pod "3b753d2b-a0b2-4fe1-9917-1c07afab39ee" (UID: "3b753d2b-a0b2-4fe1-9917-1c07afab39ee"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733448    2441 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "3b753d2b-a0b2-4fe1-9917-1c07afab39ee" (UID: "3b753d2b-a0b2-4fe1-9917-1c07afab39ee"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.733523    2441 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-script" (OuterVolumeSpecName: "script") pod "3b753d2b-a0b2-4fe1-9917-1c07afab39ee" (UID: "3b753d2b-a0b2-4fe1-9917-1c07afab39ee"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.737218    2441 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-kube-api-access-zmvtg" (OuterVolumeSpecName: "kube-api-access-zmvtg") pod "3b753d2b-a0b2-4fe1-9917-1c07afab39ee" (UID: "3b753d2b-a0b2-4fe1-9917-1c07afab39ee"). InnerVolumeSpecName "kube-api-access-zmvtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.834222    2441 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-gcp-creds\") on node \"addons-212000\" DevicePath \"\""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.834245    2441 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-data\") on node \"addons-212000\" DevicePath \"\""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.834254    2441 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zmvtg\" (UniqueName: \"kubernetes.io/projected/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-kube-api-access-zmvtg\") on node \"addons-212000\" DevicePath \"\""
	Mar 11 20:15:35 addons-212000 kubelet[2441]: I0311 20:15:35.834259    2441 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3b753d2b-a0b2-4fe1-9917-1c07afab39ee-script\") on node \"addons-212000\" DevicePath \"\""
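
These kubelet lines are the normal teardown path for the local-path helper pod: UnmountVolume.TearDown succeeds for each volume, then the reconciler records "Volume detached". A sketch of checking that no pod directories were left orphaned afterwards, assuming the standard kubelet data directory:

	out/minikube-darwin-arm64 ssh -p addons-212000 -- "sudo ls /var/lib/kubelet/pods/"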
	
	
	==> storage-provisioner [bbd1b05f1c2a] <==
	I0311 20:11:56.221076       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 20:11:56.231244       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 20:11:56.231266       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 20:11:56.237430       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 20:11:56.237511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-212000_3835f0d2-ab78-4ab4-a757-b0b3c8d6f6c2!
	I0311 20:11:56.240956       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1706d561-1f3c-4586-8507-fb2a392391a2", APIVersion:"v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-212000_3835f0d2-ab78-4ab4-a757-b0b3c8d6f6c2 became leader
	I0311 20:11:56.341181       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-212000_3835f0d2-ab78-4ab4-a757-b0b3c8d6f6c2!
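
The provisioner takes its lock through client-go leader election, which records the current holder in an annotation on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the log. A sketch of reading that record (the annotation key is the standard client-go one; nothing minikube-specific is assumed beyond the Endpoints name):

	kubectl --context addons-212000 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'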
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-212000 -n addons-212000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-212000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-212000 describe pod helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-212000 describe pod helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a: exit status 1 (40.515667ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-212000 describe pod helper-pod-delete-pvc-96ca7c81-3edb-43c3-9d40-7db83042191a: exit status 1
--- FAIL: TestAddons/parallel/Ingress (33.42s)

TestCertOptions (10.25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-985000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-985000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.951242709s)

-- stdout --
	* [cert-options-985000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-985000" primary control-plane node in "cert-options-985000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-985000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-985000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-985000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
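
Every qemu2 start in this run fails identically: minikube cannot reach the socket_vmnet network helper at /var/run/socket_vmnet, so the VM dies before Kubernetes is ever involved and every downstream assertion runs against a stopped host. A sketch of triaging the helper on the CI host, assuming it was installed via Homebrew as the minikube qemu2 driver docs describe (service name and socket path are assumptions from that setup):

	ls -l /var/run/socket_vmnet              # the helper's listening socket should exist
	sudo brew services info socket_vmnet     # is the launchd service loaded and running?
	sudo brew services restart socket_vmnet  # restart it if the socket is missing
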
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-985000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-985000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.150833ms)

-- stdout --
	* The control-plane node cert-options-985000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-985000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-985000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
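
For reference, the four SAN assertions above reduce to reading the certificate's subjectAltName extension, which never happened here because the host was stopped. A manual sketch against a running profile, using the same in-guest path the test reads:

	out/minikube-darwin-arm64 -p cert-options-985000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
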
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-985000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-985000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-985000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.009291ms)

-- stdout --
	* The control-plane node cert-options-985000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-985000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-985000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port.
-- stdout --
	* The control-plane node cert-options-985000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-985000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-11 14:03:45.672782 -0700 PDT m=+3249.018843084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-985000 -n cert-options-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-985000 -n cert-options-985000: exit status 7 (32.916167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-985000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-985000
--- FAIL: TestCertOptions (10.25s)

TestCertExpiration (197.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-969000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-969000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.609633791s)

-- stdout --
	* [cert-expiration-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-969000" primary control-plane node in "cert-expiration-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-969000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-969000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-969000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.256368709s)

-- stdout --
	* [cert-expiration-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-969000" primary control-plane node in "cert-expiration-969000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-969000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-969000" primary control-plane node in "cert-expiration-969000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
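
The expired-certs warning the test looks for never appears because the VM never boots. On a working run, the regenerated apiserver certificate's validity window could be checked directly; a sketch, assuming the guest certificate path used elsewhere in this suite:

	out/minikube-darwin-arm64 -p cert-expiration-969000 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
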
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-11 14:06:35.536856 -0700 PDT m=+3418.867045709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-969000 -n cert-expiration-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-969000 -n cert-expiration-969000: exit status 7 (68.473417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-969000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-969000
--- FAIL: TestCertExpiration (197.05s)

TestDockerFlags (10.17s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-840000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-840000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.904240833s)
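
Had the VM come up, the test would assert the --docker-env and --docker-opt values inside the guest. A manual sketch of that check, assuming Docker runs under systemd in the guest as minikube configures it:

	out/minikube-darwin-arm64 -p docker-flags-840000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-darwin-arm64 -p docker-flags-840000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"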

-- stdout --
	* [docker-flags-840000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-840000" primary control-plane node in "docker-flags-840000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-840000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:03:25.421929    4594 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:03:25.422044    4594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:25.422047    4594 out.go:304] Setting ErrFile to fd 2...
	I0311 14:03:25.422049    4594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:25.422159    4594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:03:25.423219    4594 out.go:298] Setting JSON to false
	I0311 14:03:25.439369    4594 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3776,"bootTime":1710187229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:03:25.439426    4594 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:03:25.445780    4594 out.go:177] * [docker-flags-840000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:03:25.451720    4594 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:03:25.455725    4594 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:03:25.451730    4594 notify.go:220] Checking for updates...
	I0311 14:03:25.461675    4594 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:03:25.464772    4594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:03:25.467741    4594 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:03:25.469227    4594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:03:25.472995    4594 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:25.473069    4594 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:25.473117    4594 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:03:25.477727    4594 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:03:25.483690    4594 start.go:297] selected driver: qemu2
	I0311 14:03:25.483696    4594 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:03:25.483702    4594 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:03:25.485988    4594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:03:25.489709    4594 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:03:25.492837    4594 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0311 14:03:25.492857    4594 cni.go:84] Creating CNI manager for ""
	I0311 14:03:25.492873    4594 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:03:25.492877    4594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:03:25.492919    4594 start.go:340] cluster config:
	{Name:docker-flags-840000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-840000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:03:25.497463    4594 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:03:25.504739    4594 out.go:177] * Starting "docker-flags-840000" primary control-plane node in "docker-flags-840000" cluster
	I0311 14:03:25.508674    4594 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:03:25.508688    4594 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:03:25.508701    4594 cache.go:56] Caching tarball of preloaded images
	I0311 14:03:25.508754    4594 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:03:25.508761    4594 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:03:25.508862    4594 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/docker-flags-840000/config.json ...
	I0311 14:03:25.508884    4594 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/docker-flags-840000/config.json: {Name:mk2e935f970d9229686c3e4fb7686f82ee2b4c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:03:25.509131    4594 start.go:360] acquireMachinesLock for docker-flags-840000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:25.509169    4594 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "docker-flags-840000"
	I0311 14:03:25.509183    4594 start.go:93] Provisioning new machine with config: &{Name:docker-flags-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-840000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:25.509234    4594 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:25.517655    4594 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:03:25.535503    4594 start.go:159] libmachine.API.Create for "docker-flags-840000" (driver="qemu2")
	I0311 14:03:25.535536    4594 client.go:168] LocalClient.Create starting
	I0311 14:03:25.535600    4594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:25.535627    4594 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:25.535636    4594 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:25.535683    4594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:25.535705    4594 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:25.535712    4594 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:25.536082    4594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:25.668598    4594 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:25.772499    4594 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:25.772506    4594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:25.772694    4594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2
	I0311 14:03:25.784983    4594 main.go:141] libmachine: STDOUT: 
	I0311 14:03:25.785012    4594 main.go:141] libmachine: STDERR: 
	I0311 14:03:25.785062    4594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2 +20000M
	I0311 14:03:25.796066    4594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:25.796083    4594 main.go:141] libmachine: STDERR: 
	I0311 14:03:25.796100    4594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2
	I0311 14:03:25.796105    4594 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:25.796138    4594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:96:d0:8c:c9:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2
	I0311 14:03:25.797803    4594 main.go:141] libmachine: STDOUT: 
	I0311 14:03:25.797821    4594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:25.797838    4594 client.go:171] duration metric: took 262.303042ms to LocalClient.Create
	I0311 14:03:27.800092    4594 start.go:128] duration metric: took 2.290879458s to createHost
	I0311 14:03:27.800197    4594 start.go:83] releasing machines lock for "docker-flags-840000", held for 2.29109125s
	W0311 14:03:27.800242    4594 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:27.815515    4594 out.go:177] * Deleting "docker-flags-840000" in qemu2 ...
	W0311 14:03:27.834289    4594 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:27.834314    4594 start.go:728] Will try again in 5 seconds ...
	I0311 14:03:32.836516    4594 start.go:360] acquireMachinesLock for docker-flags-840000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:32.837047    4594 start.go:364] duration metric: took 403.25µs to acquireMachinesLock for "docker-flags-840000"
	I0311 14:03:32.837192    4594 start.go:93] Provisioning new machine with config: &{Name:docker-flags-840000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-840000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:32.837434    4594 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:32.847010    4594 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:03:32.898124    4594 start.go:159] libmachine.API.Create for "docker-flags-840000" (driver="qemu2")
	I0311 14:03:32.898210    4594 client.go:168] LocalClient.Create starting
	I0311 14:03:32.898297    4594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:32.898338    4594 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:32.898352    4594 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:32.898409    4594 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:32.898437    4594 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:32.898447    4594 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:32.898993    4594 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:33.045833    4594 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:33.220198    4594 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:33.220209    4594 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:33.220391    4594 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2
	I0311 14:03:33.232952    4594 main.go:141] libmachine: STDOUT: 
	I0311 14:03:33.232982    4594 main.go:141] libmachine: STDERR: 
	I0311 14:03:33.233042    4594 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2 +20000M
	I0311 14:03:33.243765    4594 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:33.243784    4594 main.go:141] libmachine: STDERR: 
	I0311 14:03:33.243798    4594 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2
	I0311 14:03:33.243804    4594 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:33.243845    4594 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:c1:a3:32:0d:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/docker-flags-840000/disk.qcow2
	I0311 14:03:33.245524    4594 main.go:141] libmachine: STDOUT: 
	I0311 14:03:33.245540    4594 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:33.245556    4594 client.go:171] duration metric: took 347.350792ms to LocalClient.Create
	I0311 14:03:35.247674    4594 start.go:128] duration metric: took 2.410287375s to createHost
	I0311 14:03:35.247745    4594 start.go:83] releasing machines lock for "docker-flags-840000", held for 2.410749333s
	W0311 14:03:35.248145    4594 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-840000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-840000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:35.262785    4594 out.go:177] 
	W0311 14:03:35.266945    4594 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:03:35.266975    4594 out.go:239] * 
	* 
	W0311 14:03:35.269495    4594 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:03:35.281759    4594 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-840000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-840000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-840000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.356833ms)

-- stdout --
	* The control-plane node docker-flags-840000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-840000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-840000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-840000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-840000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-840000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-840000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-840000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-840000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.812583ms)

-- stdout --
	* The control-plane node docker-flags-840000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-840000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-840000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-840000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-840000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-840000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-11 14:03:35.424026 -0700 PDT m=+3238.769757709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-840000 -n docker-flags-840000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-840000 -n docker-flags-840000: exit status 7 (32.08625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-840000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-840000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-840000
--- FAIL: TestDockerFlags (10.17s)
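Every assertion failure in this run of TestDockerFlags (and in the TestForceSystemd* tests below) traces back to the same host-side error: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the follow-up checks run against a stopped profile. A minimal triage sketch for the CI host follows; the paths are taken from the log above, while the Homebrew service name is an assumption for brew-managed installs:

	# Is the daemon's unix socket present where the qemu2 driver dials it?
	ls -l /var/run/socket_vmnet

	# Is the client binary minikube execs actually installed?
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client

	# If socket_vmnet was installed via Homebrew, restarting the service
	# usually re-creates the socket (assumes the service is registered as
	# "socket_vmnet"; it must run as root to open vmnet interfaces).
	sudo brew services restart socket_vmnet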

TestForceSystemdFlag (10.09s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-517000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-517000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.875733875s)

-- stdout --
	* [force-systemd-flag-517000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-517000" primary control-plane node in "force-systemd-flag-517000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-517000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:02:54.919216    4450 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:02:54.919352    4450 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:02:54.919355    4450 out.go:304] Setting ErrFile to fd 2...
	I0311 14:02:54.919358    4450 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:02:54.919479    4450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:02:54.920522    4450 out.go:298] Setting JSON to false
	I0311 14:02:54.936374    4450 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3745,"bootTime":1710187229,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:02:54.936460    4450 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:02:54.941463    4450 out.go:177] * [force-systemd-flag-517000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:02:54.948378    4450 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:02:54.952389    4450 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:02:54.948413    4450 notify.go:220] Checking for updates...
	I0311 14:02:54.958356    4450 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:02:54.961395    4450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:02:54.964356    4450 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:02:54.967362    4450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:02:54.970786    4450 config.go:182] Loaded profile config "NoKubernetes-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0311 14:02:54.970856    4450 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:02:54.970918    4450 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:02:54.975339    4450 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:02:54.982368    4450 start.go:297] selected driver: qemu2
	I0311 14:02:54.982373    4450 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:02:54.982379    4450 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:02:54.984679    4450 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:02:54.988398    4450 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:02:54.992474    4450 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 14:02:54.992520    4450 cni.go:84] Creating CNI manager for ""
	I0311 14:02:54.992527    4450 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:02:54.992533    4450 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:02:54.992559    4450 start.go:340] cluster config:
	{Name:force-systemd-flag-517000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:02:54.996996    4450 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:02:55.004365    4450 out.go:177] * Starting "force-systemd-flag-517000" primary control-plane node in "force-systemd-flag-517000" cluster
	I0311 14:02:55.008374    4450 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:02:55.008389    4450 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:02:55.008403    4450 cache.go:56] Caching tarball of preloaded images
	I0311 14:02:55.008474    4450 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:02:55.008481    4450 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:02:55.008583    4450 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/force-systemd-flag-517000/config.json ...
	I0311 14:02:55.008596    4450 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/force-systemd-flag-517000/config.json: {Name:mk8b48e8ff0fbcf5db882af26f9e7cc4a5bafd09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:02:55.008826    4450 start.go:360] acquireMachinesLock for force-systemd-flag-517000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:02:55.008866    4450 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "force-systemd-flag-517000"
	I0311 14:02:55.008881    4450 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:02:55.008916    4450 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:02:55.017383    4450 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:02:55.035757    4450 start.go:159] libmachine.API.Create for "force-systemd-flag-517000" (driver="qemu2")
	I0311 14:02:55.035783    4450 client.go:168] LocalClient.Create starting
	I0311 14:02:55.035846    4450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:02:55.035878    4450 main.go:141] libmachine: Decoding PEM data...
	I0311 14:02:55.035894    4450 main.go:141] libmachine: Parsing certificate...
	I0311 14:02:55.035938    4450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:02:55.035960    4450 main.go:141] libmachine: Decoding PEM data...
	I0311 14:02:55.035980    4450 main.go:141] libmachine: Parsing certificate...
	I0311 14:02:55.036355    4450 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:02:55.283050    4450 main.go:141] libmachine: Creating SSH key...
	I0311 14:02:55.342060    4450 main.go:141] libmachine: Creating Disk image...
	I0311 14:02:55.342074    4450 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:02:55.342245    4450 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0311 14:02:55.354322    4450 main.go:141] libmachine: STDOUT: 
	I0311 14:02:55.354345    4450 main.go:141] libmachine: STDERR: 
	I0311 14:02:55.354401    4450 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2 +20000M
	I0311 14:02:55.365240    4450 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:02:55.365258    4450 main.go:141] libmachine: STDERR: 
	I0311 14:02:55.365290    4450 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0311 14:02:55.365296    4450 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:02:55.365326    4450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:cf:9b:03:04:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0311 14:02:55.367099    4450 main.go:141] libmachine: STDOUT: 
	I0311 14:02:55.367115    4450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:02:55.367134    4450 client.go:171] duration metric: took 331.355583ms to LocalClient.Create
	I0311 14:02:57.368234    4450 start.go:128] duration metric: took 2.359371875s to createHost
	I0311 14:02:57.368300    4450 start.go:83] releasing machines lock for "force-systemd-flag-517000", held for 2.359498625s
	W0311 14:02:57.368396    4450 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:02:57.375610    4450 out.go:177] * Deleting "force-systemd-flag-517000" in qemu2 ...
	W0311 14:02:57.409063    4450 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:02:57.409109    4450 start.go:728] Will try again in 5 seconds ...
	I0311 14:03:02.411192    4450 start.go:360] acquireMachinesLock for force-systemd-flag-517000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:02.411552    4450 start.go:364] duration metric: took 284.833µs to acquireMachinesLock for "force-systemd-flag-517000"
	I0311 14:03:02.411680    4450 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:02.411984    4450 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:02.420528    4450 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:03:02.468526    4450 start.go:159] libmachine.API.Create for "force-systemd-flag-517000" (driver="qemu2")
	I0311 14:03:02.468585    4450 client.go:168] LocalClient.Create starting
	I0311 14:03:02.468717    4450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:02.468785    4450 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:02.468800    4450 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:02.468856    4450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:02.468897    4450 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:02.468912    4450 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:02.469540    4450 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:02.619520    4450 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:02.681037    4450 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:02.681042    4450 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:02.681200    4450 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0311 14:03:02.693548    4450 main.go:141] libmachine: STDOUT: 
	I0311 14:03:02.693574    4450 main.go:141] libmachine: STDERR: 
	I0311 14:03:02.693639    4450 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2 +20000M
	I0311 14:03:02.704194    4450 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:02.704212    4450 main.go:141] libmachine: STDERR: 
	I0311 14:03:02.704225    4450 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0311 14:03:02.704229    4450 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:02.704263    4450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:97:28:41:74:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0311 14:03:02.705946    4450 main.go:141] libmachine: STDOUT: 
	I0311 14:03:02.705964    4450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:02.705978    4450 client.go:171] duration metric: took 237.394791ms to LocalClient.Create
	I0311 14:03:04.708097    4450 start.go:128] duration metric: took 2.296152666s to createHost
	I0311 14:03:04.708140    4450 start.go:83] releasing machines lock for "force-systemd-flag-517000", held for 2.296638792s
	W0311 14:03:04.708537    4450 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-517000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-517000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:04.722113    4450 out.go:177] 
	W0311 14:03:04.726307    4450 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:03:04.726348    4450 out.go:239] * 
	* 
	W0311 14:03:04.728851    4450 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:03:04.747656    4450 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-517000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-517000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-517000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.770959ms)

-- stdout --
	* The control-plane node force-systemd-flag-517000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-517000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-517000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-11 14:03:04.843902 -0700 PDT m=+3208.188650168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-517000 -n force-systemd-flag-517000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-517000 -n force-systemd-flag-517000: exit status 7 (34.436333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-517000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-517000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-517000
--- FAIL: TestForceSystemdFlag (10.09s)
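TestForceSystemdFlag never reaches its cgroup-driver assertion (docker_test.go:110); it aborts at the same socket_vmnet connection failure during VM creation. Once the daemon is reachable again, the single test can be re-run in isolation with the standard Go test harness; the package path below matches where docker_test.go lives in the minikube tree, and the timeout is an illustrative value, not the CI default:

	# Re-run only this integration test once socket_vmnet is healthy.
	go test ./test/integration -run 'TestForceSystemdFlag' -v -timeout 30m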

TestForceSystemdEnv (10.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-553000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-553000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.020646833s)

-- stdout --
	* [force-systemd-env-553000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-553000" primary control-plane node in "force-systemd-env-553000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-553000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:03:15.179087    4553 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:03:15.179239    4553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:15.179243    4553 out.go:304] Setting ErrFile to fd 2...
	I0311 14:03:15.179245    4553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:15.179390    4553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:03:15.180695    4553 out.go:298] Setting JSON to false
	I0311 14:03:15.198782    4553 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3766,"bootTime":1710187229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:03:15.198868    4553 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:03:15.204191    4553 out.go:177] * [force-systemd-env-553000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:03:15.211388    4553 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:03:15.215309    4553 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:03:15.211453    4553 notify.go:220] Checking for updates...
	I0311 14:03:15.221324    4553 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:03:15.222586    4553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:03:15.225302    4553 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:03:15.228345    4553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0311 14:03:15.231719    4553 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:15.231802    4553 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:03:15.239308    4553 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:03:15.246198    4553 start.go:297] selected driver: qemu2
	I0311 14:03:15.246213    4553 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:03:15.246222    4553 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:03:15.248757    4553 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:03:15.251276    4553 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:03:15.254468    4553 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 14:03:15.254496    4553 cni.go:84] Creating CNI manager for ""
	I0311 14:03:15.254509    4553 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:03:15.254513    4553 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:03:15.254555    4553 start.go:340] cluster config:
	{Name:force-systemd-env-553000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-553000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:03:15.259596    4553 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:03:15.267302    4553 out.go:177] * Starting "force-systemd-env-553000" primary control-plane node in "force-systemd-env-553000" cluster
	I0311 14:03:15.271342    4553 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:03:15.271379    4553 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:03:15.271393    4553 cache.go:56] Caching tarball of preloaded images
	I0311 14:03:15.271482    4553 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:03:15.271491    4553 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:03:15.271556    4553 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/force-systemd-env-553000/config.json ...
	I0311 14:03:15.271568    4553 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/force-systemd-env-553000/config.json: {Name:mkcf1e83d3477efb59329f8f48d2547c0676c7ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:03:15.271791    4553 start.go:360] acquireMachinesLock for force-systemd-env-553000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:15.271821    4553 start.go:364] duration metric: took 22.791µs to acquireMachinesLock for "force-systemd-env-553000"
	I0311 14:03:15.271832    4553 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-553000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-553000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:15.271856    4553 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:15.280335    4553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:03:15.296045    4553 start.go:159] libmachine.API.Create for "force-systemd-env-553000" (driver="qemu2")
	I0311 14:03:15.296079    4553 client.go:168] LocalClient.Create starting
	I0311 14:03:15.296143    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:15.296176    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:15.296186    4553 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:15.296232    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:15.296253    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:15.296258    4553 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:15.296605    4553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:15.439937    4553 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:15.605162    4553 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:15.605177    4553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:15.605350    4553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2
	I0311 14:03:15.617657    4553 main.go:141] libmachine: STDOUT: 
	I0311 14:03:15.617679    4553 main.go:141] libmachine: STDERR: 
	I0311 14:03:15.617727    4553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2 +20000M
	I0311 14:03:15.628529    4553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:15.628546    4553 main.go:141] libmachine: STDERR: 
	I0311 14:03:15.628559    4553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2
	I0311 14:03:15.628564    4553 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:15.628594    4553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:b4:36:1c:75:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2
	I0311 14:03:15.630306    4553 main.go:141] libmachine: STDOUT: 
	I0311 14:03:15.630322    4553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:15.630340    4553 client.go:171] duration metric: took 334.265792ms to LocalClient.Create
	I0311 14:03:17.632499    4553 start.go:128] duration metric: took 2.360696167s to createHost
	I0311 14:03:17.632624    4553 start.go:83] releasing machines lock for "force-systemd-env-553000", held for 2.360869791s
	W0311 14:03:17.632674    4553 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:17.641096    4553 out.go:177] * Deleting "force-systemd-env-553000" in qemu2 ...
	W0311 14:03:17.667226    4553 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:17.667260    4553 start.go:728] Will try again in 5 seconds ...
	I0311 14:03:22.669321    4553 start.go:360] acquireMachinesLock for force-systemd-env-553000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:22.669684    4553 start.go:364] duration metric: took 279.917µs to acquireMachinesLock for "force-systemd-env-553000"
	I0311 14:03:22.669824    4553 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-553000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-553000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:22.670114    4553 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:22.680780    4553 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:03:22.729630    4553 start.go:159] libmachine.API.Create for "force-systemd-env-553000" (driver="qemu2")
	I0311 14:03:22.729682    4553 client.go:168] LocalClient.Create starting
	I0311 14:03:22.729790    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:22.729851    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:22.729870    4553 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:22.729946    4553 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:22.729992    4553 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:22.730004    4553 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:22.730515    4553 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:22.875634    4553 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:23.094110    4553 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:23.094123    4553 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:23.094331    4553 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2
	I0311 14:03:23.107156    4553 main.go:141] libmachine: STDOUT: 
	I0311 14:03:23.107184    4553 main.go:141] libmachine: STDERR: 
	I0311 14:03:23.107274    4553 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2 +20000M
	I0311 14:03:23.117936    4553 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:23.117966    4553 main.go:141] libmachine: STDERR: 
	I0311 14:03:23.117983    4553 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2
	I0311 14:03:23.117987    4553 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:23.118026    4553 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d1:d3:be:3b:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/force-systemd-env-553000/disk.qcow2
	I0311 14:03:23.119758    4553 main.go:141] libmachine: STDOUT: 
	I0311 14:03:23.119772    4553 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:23.119789    4553 client.go:171] duration metric: took 390.114917ms to LocalClient.Create
	I0311 14:03:25.121901    4553 start.go:128] duration metric: took 2.451818083s to createHost
	I0311 14:03:25.121967    4553 start.go:83] releasing machines lock for "force-systemd-env-553000", held for 2.452332917s
	W0311 14:03:25.122386    4553 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-553000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-553000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:25.130956    4553 out.go:177] 
	W0311 14:03:25.137182    4553 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:03:25.137217    4553 out.go:239] * 
	* 
	W0311 14:03:25.139741    4553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:03:25.149012    4553 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-553000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-553000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-553000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (83.19775ms)

-- stdout --
	* The control-plane node force-systemd-env-553000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-553000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-553000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-11 14:03:25.250081 -0700 PDT m=+3228.595485376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-553000 -n force-systemd-env-553000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-553000 -n force-systemd-env-553000: exit status 7 (37.271917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-553000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-553000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-553000
--- FAIL: TestForceSystemdEnv (10.25s)
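
Both TestForceSystemdFlag and TestForceSystemdEnv fail the same way: every qemu2 VM launch goes through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of the /var/run/socket_vmnet unix socket is refused, which suggests the socket_vmnet daemon was not running on the build agent. A minimal standalone sketch of that connectivity check (not part of the test suite; the socket path is taken from the SocketVMnetPath value in the config dump above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On the failing agent this prints "connect: connection refused",
			// matching the STDERR lines captured above.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the dial is refused, restarting the daemon on the agent (on a Homebrew install, typically via sudo brew services restart socket_vmnet) would likely clear this whole family of failures.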

TestFunctional/parallel/ServiceCmdConnect (32.26s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-503000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-503000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-bfsl2" [0d26fe5c-2a4d-4ae7-8b11-90a3e522753b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-bfsl2" [0d26fe5c-2a4d-4ae7-8b11-90a3e522753b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004059s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:32290
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:32290: Get "http://192.168.105.4:32290": dial tcp 192.168.105.4:32290: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-503000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-bfsl2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-503000/192.168.105.4
Start Time:       Mon, 11 Mar 2024 13:21:39 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://9aa0e388bd31549fc60575dbd5e5f9b89b93f865baa9db5396c5a504698a5b21
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 11 Mar 2024 13:21:56 -0700
      Finished:     Mon, 11 Mar 2024 13:21:56 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phqlg (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-phqlg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  31s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-bfsl2 to functional-503000
  Normal   Pulled     14s (x3 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    14s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 30s)  kubelet            Started container echoserver-arm
  Warning  BackOff    0s (x3 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-bfsl2_default(0d26fe5c-2a4d-4ae7-8b11-90a3e522753b)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-503000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
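
The "exec format error" accounts for the CrashLoopBackOff above: the nginx binary inside registry.k8s.io/echoserver-arm:1.8 is apparently not built for the node's arm64 CPU. One way to confirm the platform of the cached image, sketched here as a hypothetical standalone Go helper (not part of the harness; it shells out to docker image inspect and assumes the shell is pointed at the cluster's Docker daemon, e.g. via minikube docker-env):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Reports the OS/architecture recorded in the image config; an
		// amd64 result on an arm64 node would explain the exec error.
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Os}}/{{.Architecture}}",
			"registry.k8s.io/echoserver-arm:1.8").CombinedOutput()
		if err != nil {
			fmt.Println("inspect failed:", err, string(out))
			return
		}
		fmt.Printf("image platform: %s", out)
	}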
functional_test.go:1610: (dbg) Run:  kubectl --context functional-503000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.80.235
IPs:                      10.97.80.235
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32290/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
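
The empty Endpoints field ties the earlier connection-refused loop together: with the Service's only pod crash-looping and never Ready, there are no endpoints behind NodePort 32290, so kube-proxy rejects connections outright instead of letting them time out. The fetch the test performs at functional_test.go:1657 amounts to roughly this (a sketch using the URL from the output above):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 3 * time.Second}
		// NodePort URL reported by "minikube service hello-node-connect --url".
		resp, err := client.Get("http://192.168.105.4:32290")
		if err != nil {
			// With no ready endpoints this fails immediately with
			// "connect: connection refused", as logged above.
			fmt.Println("fetch failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}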
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-503000 -n functional-503000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:21 PDT | 11 Mar 24 13:21 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh -- ls                                                                                          | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:21 PDT | 11 Mar 24 13:21 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh cat                                                                                            | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:21 PDT | 11 Mar 24 13:21 PDT |
	|           | /mount-9p/test-1710188515806261000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh stat                                                                                           | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh stat                                                                                           | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh sudo                                                                                           | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3691847349/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh -- ls                                                                                          | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh sudo                                                                                           | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount3    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount1    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount2    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-503000 ssh findmnt                                                                                        | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT | 11 Mar 24 13:22 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-503000 --dry-run                                                                                       | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-503000                                                                                                 | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-503000 | jenkins | v1.32.0 | 11 Mar 24 13:22 PDT |                     |
	|           | -p functional-503000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:22:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:22:09.598000    2640 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:22:09.598110    2640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:22:09.598115    2640 out.go:304] Setting ErrFile to fd 2...
	I0311 13:22:09.598117    2640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:22:09.598241    2640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:22:09.599678    2640 out.go:298] Setting JSON to false
	I0311 13:22:09.617001    2640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1300,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:22:09.617105    2640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:22:09.621578    2640 out.go:177] * [functional-503000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:22:09.628548    2640 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:22:09.632551    2640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:22:09.628579    2640 notify.go:220] Checking for updates...
	I0311 13:22:09.638572    2640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:22:09.641614    2640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:22:09.643005    2640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:22:09.646541    2640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:22:09.649886    2640 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:22:09.650122    2640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:22:09.654417    2640 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:22:09.661573    2640 start.go:297] selected driver: qemu2
	I0311 13:22:09.661579    2640 start.go:901] validating driver "qemu2" against &{Name:functional-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:22:09.661627    2640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:22:09.667489    2640 out.go:177] 
	W0311 13:22:09.675549    2640 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	I0311 13:22:09.679560    2640 out.go:177] 
	
	
	==> Docker <==
	Mar 11 20:22:02 functional-503000 dockerd[7601]: time="2024-03-11T20:22:02.569637008Z" level=info msg="ignoring event" container=ae62970f69d10cf5e6ff5d6664d695818a7faaf42c3f68033d9ff36148b62cdd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:22:03 functional-503000 cri-dockerd[7804]: time="2024-03-11T20:22:03Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.679907944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.679990840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.680157173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.680222898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:22:03 functional-503000 dockerd[7601]: time="2024-03-11T20:22:03.714082087Z" level=info msg="ignoring event" container=c002bc52bc28286fcdd168be540a079cd895d938de70f244e8c04c628198c31d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.714359448Z" level=info msg="shim disconnected" id=c002bc52bc28286fcdd168be540a079cd895d938de70f244e8c04c628198c31d namespace=moby
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.714408877Z" level=warning msg="cleaning up after shim disconnected" id=c002bc52bc28286fcdd168be540a079cd895d938de70f244e8c04c628198c31d namespace=moby
	Mar 11 20:22:03 functional-503000 dockerd[7607]: time="2024-03-11T20:22:03.714413253Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 20:22:05 functional-503000 dockerd[7601]: time="2024-03-11T20:22:05.203002071Z" level=info msg="ignoring event" container=9c3539577820c059122e52ecaf325e0bc63b9231ac184535510eea2a1508a897 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 11 20:22:05 functional-503000 dockerd[7607]: time="2024-03-11T20:22:05.203196953Z" level=info msg="shim disconnected" id=9c3539577820c059122e52ecaf325e0bc63b9231ac184535510eea2a1508a897 namespace=moby
	Mar 11 20:22:05 functional-503000 dockerd[7607]: time="2024-03-11T20:22:05.203328444Z" level=warning msg="cleaning up after shim disconnected" id=9c3539577820c059122e52ecaf325e0bc63b9231ac184535510eea2a1508a897 namespace=moby
	Mar 11 20:22:05 functional-503000 dockerd[7607]: time="2024-03-11T20:22:05.203338238Z" level=info msg="cleaning up dead shim" namespace=moby
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.644503231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.644683567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.644696237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.644760336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.662074523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.662139122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.662149958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:22:10 functional-503000 dockerd[7607]: time="2024-03-11T20:22:10.662187759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 20:22:10 functional-503000 cri-dockerd[7804]: time="2024-03-11T20:22:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/edecdf9ce5cd002f5d0e028abf3064698c23aab2556d356f88b7dc8393ca56b2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 11 20:22:10 functional-503000 cri-dockerd[7804]: time="2024-03-11T20:22:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/611cb697b5aae20d28a5faeb635e2cb96e8a009446d154d569ea6ad9781f71e3/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 11 20:22:10 functional-503000 dockerd[7601]: time="2024-03-11T20:22:10.970850377Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c002bc52bc282       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   9c3539577820c       busybox-mount
	ae62970f69d10       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            3                   186a391d410d7       hello-node-759d89bdcc-rwnn9
	9aa0e388bd315       72565bf5bbedf                                                                                         15 seconds ago       Exited              echoserver-arm            2                   d3376d6f94952       hello-node-connect-7799dfb7c6-bfsl2
	2dccd5475cf8f       nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107                         22 seconds ago       Running             myfrontend                0                   0fc8ebd26006d       sp-pod
	fc594fd6ff93a       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                         39 seconds ago       Running             nginx                     0                   3bb6539ddf9f7       nginx-svc
	23a0319d360c6       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   0172e04210d39       coredns-5dd5756b68-jvqx5
	c0f31542dcf8b       3ca3ca488cf13                                                                                         About a minute ago   Running             kube-proxy                2                   fc92bbd09da33       kube-proxy-cf2jp
	e85fbbbea0b4f       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   11c027a0ad43c       storage-provisioner
	3942ab4d39ee4       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   9ebcdf39222e9       etcd-functional-503000
	eb2e73b072cee       05c284c929889                                                                                         About a minute ago   Running             kube-scheduler            2                   92a84796ace04       kube-scheduler-functional-503000
	f969c9c4aafcd       9961cbceaf234                                                                                         About a minute ago   Running             kube-controller-manager   2                   3b76bf98de13c       kube-controller-manager-functional-503000
	1aa588e12c58b       04b4c447bb9d4                                                                                         About a minute ago   Running             kube-apiserver            0                   891586b8514b1       kube-apiserver-functional-503000
	888d4dc170a7d       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   87fc836659d18       coredns-5dd5756b68-jvqx5
	f91b1ec07d507       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   a21bbf390fe99       storage-provisioner
	5e023b8b51492       3ca3ca488cf13                                                                                         2 minutes ago        Exited              kube-proxy                1                   fcf771cda75a0       kube-proxy-cf2jp
	c0e9c9014a8e5       05c284c929889                                                                                         2 minutes ago        Exited              kube-scheduler            1                   3b8d6ac9942df       kube-scheduler-functional-503000
	3001f0c478a54       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   ff0be74642746       etcd-functional-503000
	f467d445d6094       9961cbceaf234                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   4706ce6e322bc       kube-controller-manager-functional-503000
	
	
	==> coredns [23a0319d360c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36945 - 32767 "HINFO IN 8330656942371718639.365856969536690493. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004166885s
	[INFO] 10.244.0.1:1779 - 24061 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000092608s
	[INFO] 10.244.0.1:62609 - 7157 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000106736s
	[INFO] 10.244.0.1:6323 - 59282 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000895734s
	[INFO] 10.244.0.1:15043 - 7554 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000043761s
	[INFO] 10.244.0.1:54241 - 1582 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000055556s
	[INFO] 10.244.0.1:15984 - 38442 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00008098s
	
	
	==> coredns [888d4dc170a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54105 - 974 "HINFO IN 8174076889643641999.8788559853991593968. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004155341s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-503000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-503000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=functional-503000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T13_18_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:18:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-503000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 20:22:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:21:52 +0000   Mon, 11 Mar 2024 20:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:21:52 +0000   Mon, 11 Mar 2024 20:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:21:52 +0000   Mon, 11 Mar 2024 20:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:21:52 +0000   Mon, 11 Mar 2024 20:19:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-503000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904752Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7c15836bae14c969a90dd3a17aa3d5f
	  System UUID:                d7c15836bae14c969a90dd3a17aa3d5f
	  Boot ID:                    4486fd4c-2c02-44a3-8143-0b04981a9319
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-rwnn9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  default                     hello-node-connect-7799dfb7c6-bfsl2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 coredns-5dd5756b68-jvqx5                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m59s
	  kube-system                 etcd-functional-503000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m12s
	  kube-system                 kube-apiserver-functional-503000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-functional-503000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-proxy-cf2jp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  kube-system                 kube-scheduler-functional-503000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-qlxq2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-t2nrm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m58s                  kube-proxy       
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  Starting                 3m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m12s                  kubelet          Node functional-503000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s                  kubelet          Node functional-503000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s                  kubelet          Node functional-503000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m9s                   kubelet          Node functional-503000 status is now: NodeReady
	  Normal  RegisteredNode           3m                     node-controller  Node functional-503000 event: Registered Node functional-503000 in Controller
	  Normal  NodeAllocatableEnforced  2m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node functional-503000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node functional-503000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x7 over 2m13s)  kubelet          Node functional-503000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           118s                   node-controller  Node functional-503000 event: Registered Node functional-503000 in Controller
	  Normal  Starting                 84s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)      kubelet          Node functional-503000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)      kubelet          Node functional-503000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)      kubelet          Node functional-503000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                    node-controller  Node functional-503000 event: Registered Node functional-503000 in Controller
	
	
	==> dmesg <==
	[ +17.657659] systemd-fstab-generator[7142]: Ignoring "noauto" option for root device
	[  +0.052131] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.092515] systemd-fstab-generator[7177]: Ignoring "noauto" option for root device
	[  +0.093657] systemd-fstab-generator[7189]: Ignoring "noauto" option for root device
	[  +0.093812] systemd-fstab-generator[7203]: Ignoring "noauto" option for root device
	[  +5.096548] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.316389] systemd-fstab-generator[7752]: Ignoring "noauto" option for root device
	[  +0.072668] systemd-fstab-generator[7764]: Ignoring "noauto" option for root device
	[  +0.061096] systemd-fstab-generator[7776]: Ignoring "noauto" option for root device
	[  +0.078895] systemd-fstab-generator[7791]: Ignoring "noauto" option for root device
	[  +0.224925] systemd-fstab-generator[7945]: Ignoring "noauto" option for root device
	[  +0.810497] systemd-fstab-generator[8066]: Ignoring "noauto" option for root device
	[  +4.480592] kauditd_printk_skb: 202 callbacks suppressed
	[Mar11 20:21] systemd-fstab-generator[9277]: Ignoring "noauto" option for root device
	[  +0.056519] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.309463] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.314933] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.313983] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.674586] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.017527] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.420384] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.640794] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.172291] kauditd_printk_skb: 11 callbacks suppressed
	[Mar11 20:22] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.777675] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [3001f0c478a5] <==
	{"level":"info","ts":"2024-03-11T20:19:58.807576Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T20:19:59.997049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-11T20:19:59.997216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-11T20:19:59.997298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-03-11T20:19:59.997337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-03-11T20:19:59.997402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-11T20:19:59.997458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-03-11T20:19:59.997512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-11T20:20:00.00241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:20:00.002426Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-503000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T20:20:00.003091Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:20:00.005286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-03-11T20:20:00.005304Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T20:20:00.005442Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T20:20:00.006365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T20:20:34.814861Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-11T20:20:34.814884Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-503000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-03-11T20:20:34.814916Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:20:34.814951Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:20:34.830572Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-11T20:20:34.830594Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-11T20:20:34.834516Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-03-11T20:20:34.83633Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T20:20:34.836371Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T20:20:34.836375Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-503000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [3942ab4d39ee] <==
	{"level":"info","ts":"2024-03-11T20:20:48.980538Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T20:20:48.980558Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T20:20:48.980663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-03-11T20:20:48.980701Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-03-11T20:20:48.980764Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:20:48.98079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:20:48.982279Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-11T20:20:48.982332Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T20:20:48.984063Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-03-11T20:20:48.984188Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T20:20:48.984216Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T20:20:50.231183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-11T20:20:50.231386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-11T20:20:50.231472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-03-11T20:20:50.231507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-03-11T20:20:50.231581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-11T20:20:50.231802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-03-11T20:20:50.232264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-03-11T20:20:50.237471Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-503000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T20:20:50.237649Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:20:50.238428Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T20:20:50.238469Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T20:20:50.238565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:20:50.241593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T20:20:50.269472Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 20:22:11 up 3 min,  0 users,  load average: 0.50, 0.40, 0.18
	Linux functional-503000 5.10.207 #1 SMP PREEMPT Thu Feb 22 23:40:42 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1aa588e12c58] <==
	I0311 20:20:50.910026       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0311 20:20:50.910186       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0311 20:20:50.910466       1 aggregator.go:166] initial CRD sync complete...
	I0311 20:20:50.910473       1 autoregister_controller.go:141] Starting autoregister controller
	I0311 20:20:50.910475       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0311 20:20:50.910477       1 cache.go:39] Caches are synced for autoregister controller
	I0311 20:20:50.910542       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 20:20:50.912808       1 shared_informer.go:318] Caches are synced for configmaps
	I0311 20:20:50.913927       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0311 20:20:51.828543       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 20:20:52.148451       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0311 20:20:52.151693       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0311 20:20:52.171140       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0311 20:20:52.179064       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 20:20:52.183152       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 20:21:03.568012       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 20:21:03.667508       1 controller.go:624] quota admission added evaluator for: endpoints
	I0311 20:21:07.657516       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.165.158"}
	I0311 20:21:14.275536       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0311 20:21:14.318757       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.26.65"}
	I0311 20:21:29.307225       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.223.246"}
	I0311 20:21:39.749318       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.80.235"}
	I0311 20:22:10.231706       1 controller.go:624] quota admission added evaluator for: namespaces
	I0311 20:22:10.329388       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.199.156"}
	I0311 20:22:10.338142       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.20.78"}
	
	
	==> kube-controller-manager [f467d445d609] <==
	I0311 20:20:13.121996       1 range_allocator.go:174] "Sending events to api server"
	I0311 20:20:13.122004       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0311 20:20:13.122006       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0311 20:20:13.122008       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0311 20:20:13.122196       1 shared_informer.go:318] Caches are synced for attach detach
	I0311 20:20:13.122935       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0311 20:20:13.123886       1 shared_informer.go:318] Caches are synced for crt configmap
	I0311 20:20:13.125963       1 shared_informer.go:318] Caches are synced for endpoint
	I0311 20:20:13.125999       1 shared_informer.go:318] Caches are synced for daemon sets
	I0311 20:20:13.126758       1 shared_informer.go:318] Caches are synced for deployment
	I0311 20:20:13.128067       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0311 20:20:13.128090       1 shared_informer.go:318] Caches are synced for cronjob
	I0311 20:20:13.129202       1 shared_informer.go:318] Caches are synced for taint
	I0311 20:20:13.129250       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0311 20:20:13.129287       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0311 20:20:13.129297       1 taint_manager.go:210] "Sending events to api server"
	I0311 20:20:13.129341       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-503000"
	I0311 20:20:13.129398       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0311 20:20:13.129457       1 event.go:307] "Event occurred" object="functional-503000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-503000 event: Registered Node functional-503000 in Controller"
	I0311 20:20:13.221730       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 20:20:13.251277       1 shared_informer.go:318] Caches are synced for HPA
	I0311 20:20:13.320395       1 shared_informer.go:318] Caches are synced for resource quota
	I0311 20:20:13.629590       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 20:20:13.645592       1 shared_informer.go:318] Caches are synced for garbage collector
	I0311 20:20:13.645613       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [f969c9c4aafc] <==
	I0311 20:22:10.266006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="14.016665ms"
	E0311 20:22:10.266017       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 20:22:10.270019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="3.990442ms"
	E0311 20:22:10.270028       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 20:22:10.270352       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 20:22:10.270553       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 20:22:10.274853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="3.497529ms"
	E0311 20:22:10.275034       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 20:22:10.275020       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 20:22:10.276840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.294285ms"
	E0311 20:22:10.276851       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 20:22:10.279493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.630691ms"
	E0311 20:22:10.279505       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0311 20:22:10.279521       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0311 20:22:10.294011       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-t2nrm"
	I0311 20:22:10.298342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.920648ms"
	I0311 20:22:10.303304       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-qlxq2"
	I0311 20:22:10.308981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="13.509541ms"
	I0311 20:22:10.316257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.185231ms"
	I0311 20:22:10.316306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.949468ms"
	I0311 20:22:10.316791       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="19.297µs"
	I0311 20:22:10.317011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="698.089µs"
	I0311 20:22:10.317098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.56µs"
	I0311 20:22:10.319205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="14.753µs"
	I0311 20:22:10.524586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="27.674µs"
	
	
	==> kube-proxy [5e023b8b5149] <==
	I0311 20:20:01.483657       1 server_others.go:69] "Using iptables proxy"
	I0311 20:20:01.511130       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0311 20:20:01.530885       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:20:01.530901       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:20:01.531879       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:20:01.531898       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:20:01.531982       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:20:01.531986       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:20:01.532443       1 config.go:188] "Starting service config controller"
	I0311 20:20:01.532447       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:20:01.532453       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:20:01.532455       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:20:01.532568       1 config.go:315] "Starting node config controller"
	I0311 20:20:01.532570       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:20:01.632682       1 shared_informer.go:318] Caches are synced for node config
	I0311 20:20:01.632686       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:20:01.632698       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c0f31542dcf8] <==
	I0311 20:20:52.070929       1 server_others.go:69] "Using iptables proxy"
	I0311 20:20:52.092335       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0311 20:20:52.109560       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0311 20:20:52.109572       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0311 20:20:52.116927       1 server_others.go:152] "Using iptables Proxier"
	I0311 20:20:52.117162       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 20:20:52.118272       1 server.go:846] "Version info" version="v1.28.4"
	I0311 20:20:52.118280       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:20:52.122112       1 config.go:188] "Starting service config controller"
	I0311 20:20:52.122125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 20:20:52.122133       1 config.go:97] "Starting endpoint slice config controller"
	I0311 20:20:52.124594       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 20:20:52.127197       1 config.go:315] "Starting node config controller"
	I0311 20:20:52.128802       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 20:20:52.222484       1 shared_informer.go:318] Caches are synced for service config
	I0311 20:20:52.227599       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 20:20:52.228971       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c0e9c9014a8e] <==
	I0311 20:19:59.254585       1 serving.go:348] Generated self-signed cert in-memory
	W0311 20:20:00.596422       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0311 20:20:00.596439       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 20:20:00.596444       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0311 20:20:00.596446       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0311 20:20:00.643461       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 20:20:00.643475       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:20:00.644423       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 20:20:00.644481       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 20:20:00.644490       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:20:00.644521       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 20:20:00.745319       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:20:34.817849       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0311 20:20:34.817868       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0311 20:20:34.817934       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eb2e73b072ce] <==
	I0311 20:20:48.646761       1 serving.go:348] Generated self-signed cert in-memory
	I0311 20:20:50.884770       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0311 20:20:50.884783       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:20:50.886335       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0311 20:20:50.886356       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0311 20:20:50.886374       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0311 20:20:50.886387       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0311 20:20:50.886409       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0311 20:20:50.886422       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0311 20:20:50.886809       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0311 20:20:50.887377       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0311 20:20:50.987275       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0311 20:20:50.987275       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0311 20:20:50.987286       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 20:21:57 functional-503000 kubelet[8073]: E0311 20:21:57.087120    8073 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-bfsl2_default(0d26fe5c-2a4d-4ae7-8b11-90a3e522753b)\"" pod="default/hello-node-connect-7799dfb7c6-bfsl2" podUID="0d26fe5c-2a4d-4ae7-8b11-90a3e522753b"
	Mar 11 20:21:57 functional-503000 kubelet[8073]: I0311 20:21:57.644047    8073 topology_manager.go:215] "Topology Admit Handler" podUID="ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d" podNamespace="default" podName="busybox-mount"
	Mar 11 20:21:57 functional-503000 kubelet[8073]: I0311 20:21:57.802359    8073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-test-volume\") pod \"busybox-mount\" (UID: \"ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d\") " pod="default/busybox-mount"
	Mar 11 20:21:57 functional-503000 kubelet[8073]: I0311 20:21:57.802401    8073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pb9\" (UniqueName: \"kubernetes.io/projected/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-kube-api-access-j5pb9\") pod \"busybox-mount\" (UID: \"ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d\") " pod="default/busybox-mount"
	Mar 11 20:22:02 functional-503000 kubelet[8073]: I0311 20:22:02.515854    8073 scope.go:117] "RemoveContainer" containerID="131b54851d615e808017fa329840519d8299ac96b16ba14665960a25cb06c7bd"
	Mar 11 20:22:03 functional-503000 kubelet[8073]: I0311 20:22:03.117727    8073 scope.go:117] "RemoveContainer" containerID="131b54851d615e808017fa329840519d8299ac96b16ba14665960a25cb06c7bd"
	Mar 11 20:22:03 functional-503000 kubelet[8073]: I0311 20:22:03.117869    8073 scope.go:117] "RemoveContainer" containerID="ae62970f69d10cf5e6ff5d6664d695818a7faaf42c3f68033d9ff36148b62cdd"
	Mar 11 20:22:03 functional-503000 kubelet[8073]: E0311 20:22:03.117959    8073 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-rwnn9_default(9b980977-07dd-4758-8d29-b15632e9f334)\"" pod="default/hello-node-759d89bdcc-rwnn9" podUID="9b980977-07dd-4758-8d29-b15632e9f334"
	Mar 11 20:22:05 functional-503000 kubelet[8073]: I0311 20:22:05.340709    8073 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5pb9\" (UniqueName: \"kubernetes.io/projected/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-kube-api-access-j5pb9\") pod \"ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d\" (UID: \"ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d\") "
	Mar 11 20:22:05 functional-503000 kubelet[8073]: I0311 20:22:05.340751    8073 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-test-volume\") pod \"ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d\" (UID: \"ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d\") "
	Mar 11 20:22:05 functional-503000 kubelet[8073]: I0311 20:22:05.340784    8073 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-test-volume" (OuterVolumeSpecName: "test-volume") pod "ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d" (UID: "ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Mar 11 20:22:05 functional-503000 kubelet[8073]: I0311 20:22:05.343371    8073 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-kube-api-access-j5pb9" (OuterVolumeSpecName: "kube-api-access-j5pb9") pod "ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d" (UID: "ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d"). InnerVolumeSpecName "kube-api-access-j5pb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 20:22:05 functional-503000 kubelet[8073]: I0311 20:22:05.441094    8073 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j5pb9\" (UniqueName: \"kubernetes.io/projected/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-kube-api-access-j5pb9\") on node \"functional-503000\" DevicePath \"\""
	Mar 11 20:22:05 functional-503000 kubelet[8073]: I0311 20:22:05.441110    8073 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d-test-volume\") on node \"functional-503000\" DevicePath \"\""
	Mar 11 20:22:06 functional-503000 kubelet[8073]: I0311 20:22:06.139815    8073 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3539577820c059122e52ecaf325e0bc63b9231ac184535510eea2a1508a897"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.297531    8073 topology_manager.go:215] "Topology Admit Handler" podUID="648a78ab-bd0a-4455-8bae-c72c91c937e8" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-t2nrm"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: E0311 20:22:10.297567    8073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d" containerName="mount-munger"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.297586    8073 memory_manager.go:346] "RemoveStaleState removing state" podUID="ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d" containerName="mount-munger"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.308650    8073 topology_manager.go:215] "Topology Admit Handler" podUID="26691386-a04b-4b72-b4e7-7a634d4e6327" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-qlxq2"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.467485    8073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/26691386-a04b-4b72-b4e7-7a634d4e6327-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-qlxq2\" (UID: \"26691386-a04b-4b72-b4e7-7a634d4e6327\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-qlxq2"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.467517    8073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gkpp\" (UniqueName: \"kubernetes.io/projected/648a78ab-bd0a-4455-8bae-c72c91c937e8-kube-api-access-8gkpp\") pod \"kubernetes-dashboard-8694d4445c-t2nrm\" (UID: \"648a78ab-bd0a-4455-8bae-c72c91c937e8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t2nrm"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.467529    8073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/648a78ab-bd0a-4455-8bae-c72c91c937e8-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-t2nrm\" (UID: \"648a78ab-bd0a-4455-8bae-c72c91c937e8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-t2nrm"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.467542    8073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wgnb\" (UniqueName: \"kubernetes.io/projected/26691386-a04b-4b72-b4e7-7a634d4e6327-kube-api-access-9wgnb\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-qlxq2\" (UID: \"26691386-a04b-4b72-b4e7-7a634d4e6327\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-qlxq2"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: I0311 20:22:10.514930    8073 scope.go:117] "RemoveContainer" containerID="9aa0e388bd31549fc60575dbd5e5f9b89b93f865baa9db5396c5a504698a5b21"
	Mar 11 20:22:10 functional-503000 kubelet[8073]: E0311 20:22:10.515050    8073 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-bfsl2_default(0d26fe5c-2a4d-4ae7-8b11-90a3e522753b)\"" pod="default/hello-node-connect-7799dfb7c6-bfsl2" podUID="0d26fe5c-2a4d-4ae7-8b11-90a3e522753b"
	
	
	==> storage-provisioner [e85fbbbea0b4] <==
	I0311 20:20:52.045392       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 20:20:52.057040       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 20:20:52.057068       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 20:21:09.478458       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 20:21:09.478554       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-503000_57595414-9642-4c9d-bc31-10fe5132ce19!
	I0311 20:21:09.478916       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3787b78f-f9f9-4031-adae-4734d7b1efb1", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-503000_57595414-9642-4c9d-bc31-10fe5132ce19 became leader
	I0311 20:21:09.579059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-503000_57595414-9642-4c9d-bc31-10fe5132ce19!
	I0311 20:21:36.136942       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0311 20:21:36.137329       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c74ce817-5610-4a0a-bb64-21897c772329", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0311 20:21:36.136971       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    b8676d04-be54-4f54-bef0-24a1ee81d775 362 0 2024-03-11 20:19:13 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-11 20:19:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c74ce817-5610-4a0a-bb64-21897c772329 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c74ce817-5610-4a0a-bb64-21897c772329 726 0 2024-03-11 20:21:36 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-11 20:21:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-11 20:21:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0311 20:21:36.138095       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c74ce817-5610-4a0a-bb64-21897c772329" provisioned
	I0311 20:21:36.138143       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0311 20:21:36.138163       1 volume_store.go:212] Trying to save persistentvolume "pvc-c74ce817-5610-4a0a-bb64-21897c772329"
	I0311 20:21:36.141720       1 volume_store.go:219] persistentvolume "pvc-c74ce817-5610-4a0a-bb64-21897c772329" saved
	I0311 20:21:36.141884       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c74ce817-5610-4a0a-bb64-21897c772329", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c74ce817-5610-4a0a-bb64-21897c772329
	
	
	==> storage-provisioner [f91b1ec07d50] <==
	I0311 20:20:01.507148       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 20:20:01.513414       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 20:20:01.514081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 20:20:18.898643       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 20:20:18.898783       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3787b78f-f9f9-4031-adae-4734d7b1efb1", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-503000_943feeb4-9bf3-41b7-a599-952d9b905f78 became leader
	I0311 20:20:18.898821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-503000_943feeb4-9bf3-41b7-a599-952d9b905f78!
	I0311 20:20:18.999519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-503000_943feeb4-9bf3-41b7-a599-952d9b905f78!
	

-- /stdout --
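Note: the storage-provisioner log above records a complete dynamic-provisioning round trip for claim default/myclaim (class "standard", ReadWriteOnce, 500Mi, provisioned to /tmp/hostpath-provisioner/default/myclaim). A claim equivalent to the one in the log can be recreated by hand; the manifest below is a sketch reconstructed from the logged spec, not copied from the test source:

	kubectl --context functional-503000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]   # matches AccessModes:[ReadWriteOnce] in the log
	  resources:
	    requests:
	      storage: 500Mi               # matches the logged 524288000-byte request
	  # storageClassName omitted: "standard" is annotated is-default-class=true
	EOF
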
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-503000 -n functional-503000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-503000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-qlxq2 kubernetes-dashboard-8694d4445c-t2nrm
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-503000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-qlxq2 kubernetes-dashboard-8694d4445c-t2nrm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-503000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-qlxq2 kubernetes-dashboard-8694d4445c-t2nrm: exit status 1 (42.188833ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-503000/192.168.105.4
	Start Time:       Mon, 11 Mar 2024 13:21:57 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://c002bc52bc28286fcdd168be540a079cd895d938de70f244e8c04c628198c31d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 11 Mar 2024 13:22:03 -0700
	      Finished:     Mon, 11 Mar 2024 13:22:03 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5pb9 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-j5pb9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14s   default-scheduler  Successfully assigned default/busybox-mount to functional-503000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 5.582s (5.582s including waiting)
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-qlxq2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-t2nrm" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-503000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-qlxq2 kubernetes-dashboard-8694d4445c-t2nrm: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.26s)
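Note: the kubelet log captured above shows the test's backing container, echoserver-arm in pod hello-node-connect-7799dfb7c6-bfsl2, stuck in CrashLoopBackOff, which is consistent with the service-connect failure. The post-mortem queries can be replayed by hand; a sketch, assuming the functional-503000 context is still in the local kubeconfig and that the pod's ReplicaSet belongs to a Deployment named hello-node-connect (inferred from the pod-name hash, not confirmed by the log):

	# List pods that are not in phase Running (same query as helpers_test.go:261).
	kubectl --context functional-503000 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running

	# Fetch the previous (crashed) container log for the echoserver-arm pod.
	kubectl --context functional-503000 -n default logs --previous deploy/hello-node-connect
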

TestMutliControlPlane/serial/StopSecondaryNode (214.16s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-674000 node stop m02 -v=7 --alsologtostderr: (12.190094042s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr
E0311 13:28:58.066570    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:29:29.077600    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr: exit status 7 (2m56.012999666s)

-- stdout --
	ha-674000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0311 13:28:16.608226    3014 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:28:16.608571    3014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:28:16.608574    3014 out.go:304] Setting ErrFile to fd 2...
	I0311 13:28:16.608577    3014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:28:16.608745    3014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:28:16.608883    3014 out.go:298] Setting JSON to false
	I0311 13:28:16.608894    3014 mustload.go:65] Loading cluster: ha-674000
	I0311 13:28:16.608959    3014 notify.go:220] Checking for updates...
	I0311 13:28:16.609107    3014 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:28:16.609113    3014 status.go:255] checking status of ha-674000 ...
	I0311 13:28:16.609886    3014 status.go:330] ha-674000 host status = "Running" (err=<nil>)
	I0311 13:28:16.609912    3014 host.go:66] Checking if "ha-674000" exists ...
	I0311 13:28:16.610031    3014 host.go:66] Checking if "ha-674000" exists ...
	I0311 13:28:16.610134    3014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:28:16.610143    3014 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/id_rsa Username:docker}
	W0311 13:28:42.576482    3014 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0311 13:28:42.576639    3014 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 13:28:42.576671    3014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 13:28:42.576687    3014 status.go:257] ha-674000 status: &{Name:ha-674000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 13:28:42.576723    3014 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 13:28:42.576734    3014 status.go:255] checking status of ha-674000-m02 ...
	I0311 13:28:42.577157    3014 status.go:330] ha-674000-m02 host status = "Stopped" (err=<nil>)
	I0311 13:28:42.577167    3014 status.go:343] host is not running, skipping remaining checks
	I0311 13:28:42.577172    3014 status.go:257] ha-674000-m02 status: &{Name:ha-674000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:28:42.577182    3014 status.go:255] checking status of ha-674000-m03 ...
	I0311 13:28:42.578440    3014 status.go:330] ha-674000-m03 host status = "Running" (err=<nil>)
	I0311 13:28:42.578450    3014 host.go:66] Checking if "ha-674000-m03" exists ...
	I0311 13:28:42.578588    3014 host.go:66] Checking if "ha-674000-m03" exists ...
	I0311 13:28:42.578741    3014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:28:42.578750    3014 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m03/id_rsa Username:docker}
	W0311 13:29:57.579076    3014 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0311 13:29:57.579146    3014 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0311 13:29:57.579155    3014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 13:29:57.579159    3014 status.go:257] ha-674000-m03 status: &{Name:ha-674000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 13:29:57.579167    3014 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 13:29:57.579171    3014 status.go:255] checking status of ha-674000-m04 ...
	I0311 13:29:57.580035    3014 status.go:330] ha-674000-m04 host status = "Running" (err=<nil>)
	I0311 13:29:57.580044    3014 host.go:66] Checking if "ha-674000-m04" exists ...
	I0311 13:29:57.580130    3014 host.go:66] Checking if "ha-674000-m04" exists ...
	I0311 13:29:57.580239    3014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:29:57.580246    3014 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m04/id_rsa Username:docker}
	W0311 13:31:12.580307    3014 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0311 13:31:12.580354    3014 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0311 13:31:12.580363    3014 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0311 13:31:12.580367    3014 status.go:257] ha-674000-m04 status: &{Name:ha-674000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0311 13:31:12.580376    3014 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr": ha-674000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-674000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-674000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr": ha-674000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-674000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-674000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr": ha-674000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-674000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-674000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
E0311 13:31:14.201200    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 3 (25.959143583s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0311 13:31:38.539392    3071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 13:31:38.539404    3071 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (214.16s)
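Note: nearly all of this test's 214s is the status command waiting out three SSH dials (192.168.105.5, .7 and .8 on port 22) that each run to the TCP connect timeout. Whether the guests are reachable at all can be checked directly with the key paths shown in the log; an illustrative sketch using the BSD nc flags shipped with macOS, not part of the test suite:

	# Probe the SSH port with a 5-second connect timeout (-G is BSD/macOS nc).
	nc -z -G 5 192.168.105.5 22 && echo "ssh port open" || echo "ssh port unreachable"

	# Run the same probe command that status runs, with a bounded timeout.
	ssh -i /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/id_rsa \
	  -o ConnectTimeout=5 docker@192.168.105.5 'df -h /var'
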

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.28s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0311 13:31:41.904303    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.325301375s)
ha_test.go:413: expected profile "ha-674000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-674000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-674000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-674000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 3 (25.957547292s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0311 13:33:21.818967    3102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 13:33:21.818980    3102 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.28s)
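Note: the assertion at ha_test.go:413 inspects only the top-level "Status" field of each profile in the JSON above. Extracting that field makes the Degraded-vs-Stopped mismatch easier to see; a sketch assuming jq is available on the host (the test suite itself does not require it):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-674000") | .Status'
	# expected by the test: Degraded    observed in this run: Stopped
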

TestMutliControlPlane/serial/RestartSecondaryNode (209.15s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-674000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.093606959s)

-- stdout --
	* Starting "ha-674000-m02" control-plane node in "ha-674000" cluster
	* Restarting existing qemu2 VM for "ha-674000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-674000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 13:33:21.861083    3111 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:33:21.861312    3111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:33:21.861315    3111 out.go:304] Setting ErrFile to fd 2...
	I0311 13:33:21.861318    3111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:33:21.861443    3111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:33:21.861695    3111 mustload.go:65] Loading cluster: ha-674000
	I0311 13:33:21.861932    3111 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0311 13:33:21.862162    3111 host.go:58] "ha-674000-m02" host status: Stopped
	I0311 13:33:21.866775    3111 out.go:177] * Starting "ha-674000-m02" control-plane node in "ha-674000" cluster
	I0311 13:33:21.869808    3111 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:33:21.869823    3111 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:33:21.869837    3111 cache.go:56] Caching tarball of preloaded images
	I0311 13:33:21.869943    3111 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:33:21.869966    3111 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:33:21.870082    3111 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/ha-674000/config.json ...
	I0311 13:33:21.870506    3111 start.go:360] acquireMachinesLock for ha-674000-m02: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:33:21.870542    3111 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "ha-674000-m02"
	I0311 13:33:21.870550    3111 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:33:21.870556    3111 fix.go:54] fixHost starting: m02
	I0311 13:33:21.870658    3111 fix.go:112] recreateIfNeeded on ha-674000-m02: state=Stopped err=<nil>
	W0311 13:33:21.870664    3111 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:33:21.873678    3111 out.go:177] * Restarting existing qemu2 VM for "ha-674000-m02" ...
	I0311 13:33:21.877754    3111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:5b:f5:b2:cf:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/disk.qcow2
	I0311 13:33:21.880425    3111 main.go:141] libmachine: STDOUT: 
	I0311 13:33:21.880445    3111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:33:21.880469    3111 fix.go:56] duration metric: took 9.912792ms for fixHost
	I0311 13:33:21.880474    3111 start.go:83] releasing machines lock for "ha-674000-m02", held for 9.928ms
	W0311 13:33:21.880478    3111 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:33:21.880508    3111 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:33:21.880515    3111 start.go:728] Will try again in 5 seconds ...
	I0311 13:33:26.882485    3111 start.go:360] acquireMachinesLock for ha-674000-m02: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:33:26.882672    3111 start.go:364] duration metric: took 141.792µs to acquireMachinesLock for "ha-674000-m02"
	I0311 13:33:26.882720    3111 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:33:26.882727    3111 fix.go:54] fixHost starting: m02
	I0311 13:33:26.882976    3111 fix.go:112] recreateIfNeeded on ha-674000-m02: state=Stopped err=<nil>
	W0311 13:33:26.882985    3111 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:33:26.886289    3111 out.go:177] * Restarting existing qemu2 VM for "ha-674000-m02" ...
	I0311 13:33:26.890381    3111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:5b:f5:b2:cf:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/disk.qcow2
	I0311 13:33:26.893746    3111 main.go:141] libmachine: STDOUT: 
	I0311 13:33:26.893779    3111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:33:26.893819    3111 fix.go:56] duration metric: took 11.092708ms for fixHost
	I0311 13:33:26.893827    3111 start.go:83] releasing machines lock for "ha-674000-m02", held for 11.147792ms
	W0311 13:33:26.893911    3111 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:33:26.898334    3111 out.go:177] 
	W0311 13:33:26.902290    3111 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:33:26.902299    3111 out.go:239] * 
	* 
	W0311 13:33:26.905432    3111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:33:26.910348    3111 out.go:177] 

** /stderr **
ha_test.go:422: I0311 13:33:21.861083    3111 out.go:291] Setting OutFile to fd 1 ...
I0311 13:33:21.861312    3111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:33:21.861315    3111 out.go:304] Setting ErrFile to fd 2...
I0311 13:33:21.861318    3111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:33:21.861443    3111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:33:21.861695    3111 mustload.go:65] Loading cluster: ha-674000
I0311 13:33:21.861932    3111 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
W0311 13:33:21.862162    3111 host.go:58] "ha-674000-m02" host status: Stopped
I0311 13:33:21.866775    3111 out.go:177] * Starting "ha-674000-m02" control-plane node in "ha-674000" cluster
I0311 13:33:21.869808    3111 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0311 13:33:21.869823    3111 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0311 13:33:21.869837    3111 cache.go:56] Caching tarball of preloaded images
I0311 13:33:21.869943    3111 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0311 13:33:21.869966    3111 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0311 13:33:21.870082    3111 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/ha-674000/config.json ...
I0311 13:33:21.870506    3111 start.go:360] acquireMachinesLock for ha-674000-m02: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0311 13:33:21.870542    3111 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "ha-674000-m02"
I0311 13:33:21.870550    3111 start.go:96] Skipping create...Using existing machine configuration
I0311 13:33:21.870556    3111 fix.go:54] fixHost starting: m02
I0311 13:33:21.870658    3111 fix.go:112] recreateIfNeeded on ha-674000-m02: state=Stopped err=<nil>
W0311 13:33:21.870664    3111 fix.go:138] unexpected machine state, will restart: <nil>
I0311 13:33:21.873678    3111 out.go:177] * Restarting existing qemu2 VM for "ha-674000-m02" ...
I0311 13:33:21.877754    3111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:5b:f5:b2:cf:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/disk.qcow2
I0311 13:33:21.880425    3111 main.go:141] libmachine: STDOUT: 
I0311 13:33:21.880445    3111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0311 13:33:21.880469    3111 fix.go:56] duration metric: took 9.912792ms for fixHost
I0311 13:33:21.880474    3111 start.go:83] releasing machines lock for "ha-674000-m02", held for 9.928ms
W0311 13:33:21.880478    3111 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0311 13:33:21.880508    3111 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0311 13:33:21.880515    3111 start.go:728] Will try again in 5 seconds ...
I0311 13:33:26.882485    3111 start.go:360] acquireMachinesLock for ha-674000-m02: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0311 13:33:26.882672    3111 start.go:364] duration metric: took 141.792µs to acquireMachinesLock for "ha-674000-m02"
I0311 13:33:26.882720    3111 start.go:96] Skipping create...Using existing machine configuration
I0311 13:33:26.882727    3111 fix.go:54] fixHost starting: m02
I0311 13:33:26.882976    3111 fix.go:112] recreateIfNeeded on ha-674000-m02: state=Stopped err=<nil>
W0311 13:33:26.882985    3111 fix.go:138] unexpected machine state, will restart: <nil>
I0311 13:33:26.886289    3111 out.go:177] * Restarting existing qemu2 VM for "ha-674000-m02" ...
I0311 13:33:26.890381    3111 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:5b:f5:b2:cf:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m02/disk.qcow2
I0311 13:33:26.893746    3111 main.go:141] libmachine: STDOUT: 
I0311 13:33:26.893779    3111 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0311 13:33:26.893819    3111 fix.go:56] duration metric: took 11.092708ms for fixHost
I0311 13:33:26.893827    3111 start.go:83] releasing machines lock for "ha-674000-m02", held for 11.147792ms
W0311 13:33:26.893911    3111 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0311 13:33:26.898334    3111 out.go:177] 
W0311 13:33:26.902290    3111 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0311 13:33:26.902299    3111 out.go:239] * 
* 
W0311 13:33:26.905432    3111 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0311 13:33:26.910348    3111 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-674000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr
E0311 13:34:29.069322    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:35:52.136511    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:36:14.191964    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr: exit status 7 (2m58.057452208s)

-- stdout --
	ha-674000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-674000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0311 13:33:26.959891    3115 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:33:26.960034    3115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:33:26.960037    3115 out.go:304] Setting ErrFile to fd 2...
	I0311 13:33:26.960040    3115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:33:26.960165    3115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:33:26.960290    3115 out.go:298] Setting JSON to false
	I0311 13:33:26.960303    3115 mustload.go:65] Loading cluster: ha-674000
	I0311 13:33:26.960344    3115 notify.go:220] Checking for updates...
	I0311 13:33:26.960538    3115 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:33:26.960544    3115 status.go:255] checking status of ha-674000 ...
	I0311 13:33:26.961367    3115 status.go:330] ha-674000 host status = "Running" (err=<nil>)
	I0311 13:33:26.961375    3115 host.go:66] Checking if "ha-674000" exists ...
	I0311 13:33:26.961491    3115 host.go:66] Checking if "ha-674000" exists ...
	I0311 13:33:26.961616    3115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:33:26.961625    3115 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/id_rsa Username:docker}
	W0311 13:33:26.961809    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:26.961821    3115 retry.go:31] will retry after 197.42655ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 13:33:27.161472    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:27.161517    3115 retry.go:31] will retry after 488.401137ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 13:33:27.651067    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:27.651112    3115 retry.go:31] will retry after 362.069926ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 13:33:28.015692    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:28.015785    3115 retry.go:31] will retry after 628.716761ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 13:33:28.645796    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:28.645964    3115 retry.go:31] will retry after 194.630005ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:28.842795    3115 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/id_rsa Username:docker}
	W0311 13:33:28.843880    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0311 13:33:28.843918    3115 retry.go:31] will retry after 182.592684ms: dial tcp 192.168.105.5:22: connect: host is down
	W0311 13:33:54.953056    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0311 13:33:54.953106    3115 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 13:33:54.953114    3115 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 13:33:54.953118    3115 status.go:257] ha-674000 status: &{Name:ha-674000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 13:33:54.953135    3115 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0311 13:33:54.953139    3115 status.go:255] checking status of ha-674000-m02 ...
	I0311 13:33:54.953338    3115 status.go:330] ha-674000-m02 host status = "Stopped" (err=<nil>)
	I0311 13:33:54.953343    3115 status.go:343] host is not running, skipping remaining checks
	I0311 13:33:54.953345    3115 status.go:257] ha-674000-m02 status: &{Name:ha-674000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:33:54.953350    3115 status.go:255] checking status of ha-674000-m03 ...
	I0311 13:33:54.954080    3115 status.go:330] ha-674000-m03 host status = "Running" (err=<nil>)
	I0311 13:33:54.954087    3115 host.go:66] Checking if "ha-674000-m03" exists ...
	I0311 13:33:54.954200    3115 host.go:66] Checking if "ha-674000-m03" exists ...
	I0311 13:33:54.954329    3115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:33:54.954339    3115 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m03/id_rsa Username:docker}
	W0311 13:35:09.955334    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0311 13:35:09.955567    3115 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0311 13:35:09.955645    3115 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 13:35:09.955666    3115 status.go:257] ha-674000-m03 status: &{Name:ha-674000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0311 13:35:09.955711    3115 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0311 13:35:09.955766    3115 status.go:255] checking status of ha-674000-m04 ...
	I0311 13:35:09.959232    3115 status.go:330] ha-674000-m04 host status = "Running" (err=<nil>)
	I0311 13:35:09.959261    3115 host.go:66] Checking if "ha-674000-m04" exists ...
	I0311 13:35:09.959758    3115 host.go:66] Checking if "ha-674000-m04" exists ...
	I0311 13:35:09.960321    3115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 13:35:09.960362    3115 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000-m04/id_rsa Username:docker}
	W0311 13:36:24.960972    3115 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0311 13:36:24.961161    3115 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0311 13:36:24.961205    3115 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0311 13:36:24.961222    3115 status.go:257] ha-674000-m04 status: &{Name:ha-674000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0311 13:36:24.961273    3115 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 3 (25.9981805s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0311 13:36:50.959016    3144 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0311 13:36:50.959116    3144 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (209.15s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-674000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-674000 -v=7 --alsologtostderr
E0311 13:39:29.061232    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:41:14.184486    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-674000 -v=7 --alsologtostderr: (3m49.021389s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-674000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-674000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.22791525s)

-- stdout --
	* [ha-674000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-674000" primary control-plane node in "ha-674000" cluster
	* Restarting existing qemu2 VM for "ha-674000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-674000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 13:42:00.705083    3279 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:42:00.705274    3279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:42:00.705278    3279 out.go:304] Setting ErrFile to fd 2...
	I0311 13:42:00.705281    3279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:42:00.705446    3279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:42:00.706766    3279 out.go:298] Setting JSON to false
	I0311 13:42:00.726832    3279 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2491,"bootTime":1710187229,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:42:00.726922    3279 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:42:00.731640    3279 out.go:177] * [ha-674000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:42:00.739670    3279 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:42:00.739719    3279 notify.go:220] Checking for updates...
	I0311 13:42:00.741397    3279 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:42:00.744583    3279 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:42:00.747639    3279 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:42:00.750635    3279 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:42:00.753661    3279 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:42:00.756943    3279 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:42:00.756995    3279 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:42:00.761613    3279 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:42:00.768628    3279 start.go:297] selected driver: qemu2
	I0311 13:42:00.768635    3279 start.go:901] validating driver "qemu2" against &{Name:ha-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-674000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:42:00.768721    3279 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:42:00.771643    3279 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:42:00.771667    3279 cni.go:84] Creating CNI manager for ""
	I0311 13:42:00.771675    3279 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 13:42:00.771725    3279 start.go:340] cluster config:
	{Name:ha-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-674000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:42:00.777008    3279 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:42:00.785585    3279 out.go:177] * Starting "ha-674000" primary control-plane node in "ha-674000" cluster
	I0311 13:42:00.788660    3279 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:42:00.788677    3279 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:42:00.788688    3279 cache.go:56] Caching tarball of preloaded images
	I0311 13:42:00.788762    3279 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:42:00.788768    3279 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:42:00.788851    3279 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/ha-674000/config.json ...
	I0311 13:42:00.789314    3279 start.go:360] acquireMachinesLock for ha-674000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:42:00.789348    3279 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "ha-674000"
	I0311 13:42:00.789358    3279 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:42:00.789364    3279 fix.go:54] fixHost starting: 
	I0311 13:42:00.789479    3279 fix.go:112] recreateIfNeeded on ha-674000: state=Stopped err=<nil>
	W0311 13:42:00.789488    3279 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:42:00.793591    3279 out.go:177] * Restarting existing qemu2 VM for "ha-674000" ...
	I0311 13:42:00.801578    3279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:34:65:f2:bb:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/disk.qcow2
	I0311 13:42:00.803698    3279 main.go:141] libmachine: STDOUT: 
	I0311 13:42:00.803719    3279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:42:00.803752    3279 fix.go:56] duration metric: took 14.388417ms for fixHost
	I0311 13:42:00.803757    3279 start.go:83] releasing machines lock for "ha-674000", held for 14.405458ms
	W0311 13:42:00.803764    3279 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:42:00.803803    3279 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:42:00.803807    3279 start.go:728] Will try again in 5 seconds ...
	I0311 13:42:05.805921    3279 start.go:360] acquireMachinesLock for ha-674000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:42:05.806237    3279 start.go:364] duration metric: took 255.542µs to acquireMachinesLock for "ha-674000"
	I0311 13:42:05.806360    3279 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:42:05.806381    3279 fix.go:54] fixHost starting: 
	I0311 13:42:05.807020    3279 fix.go:112] recreateIfNeeded on ha-674000: state=Stopped err=<nil>
	W0311 13:42:05.807046    3279 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:42:05.812529    3279 out.go:177] * Restarting existing qemu2 VM for "ha-674000" ...
	I0311 13:42:05.817625    3279 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:34:65:f2:bb:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/disk.qcow2
	I0311 13:42:05.827328    3279 main.go:141] libmachine: STDOUT: 
	I0311 13:42:05.827394    3279 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:42:05.827464    3279 fix.go:56] duration metric: took 21.085166ms for fixHost
	I0311 13:42:05.827487    3279 start.go:83] releasing machines lock for "ha-674000", held for 21.223792ms
	W0311 13:42:05.827660    3279 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:42:05.836385    3279 out.go:177] 
	W0311 13:42:05.840500    3279 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:42:05.840536    3279 out.go:239] * 
	* 
	W0311 13:42:05.842975    3279 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:42:05.854419    3279 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-674000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-674000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (34.602958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (234.39s)

TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-674000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.60475ms)

-- stdout --
	* The control-plane node ha-674000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-674000"

-- /stdout --
** stderr ** 
	I0311 13:42:06.001118    3292 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:42:06.001357    3292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:42:06.001361    3292 out.go:304] Setting ErrFile to fd 2...
	I0311 13:42:06.001362    3292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:42:06.001492    3292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:42:06.001731    3292 mustload.go:65] Loading cluster: ha-674000
	I0311 13:42:06.001943    3292 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0311 13:42:06.002242    3292 out.go:239] ! The control-plane node ha-674000 host is not running (will try others): state=Stopped
	! The control-plane node ha-674000 host is not running (will try others): state=Stopped
	W0311 13:42:06.002350    3292 out.go:239] ! The control-plane node ha-674000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-674000-m02 host is not running (will try others): state=Stopped
	I0311 13:42:06.006732    3292 out.go:177] * The control-plane node ha-674000-m03 host is not running: state=Stopped
	I0311 13:42:06.009609    3292 out.go:177]   To start a cluster, run: "minikube start -p ha-674000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-674000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr: exit status 7 (32.4525ms)

-- stdout --
	ha-674000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:42:06.043918    3294 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:42:06.044057    3294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:42:06.044061    3294 out.go:304] Setting ErrFile to fd 2...
	I0311 13:42:06.044066    3294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:42:06.044193    3294 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:42:06.044320    3294 out.go:298] Setting JSON to false
	I0311 13:42:06.044332    3294 mustload.go:65] Loading cluster: ha-674000
	I0311 13:42:06.044399    3294 notify.go:220] Checking for updates...
	I0311 13:42:06.044551    3294 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:42:06.044557    3294 status.go:255] checking status of ha-674000 ...
	I0311 13:42:06.044776    3294 status.go:330] ha-674000 host status = "Stopped" (err=<nil>)
	I0311 13:42:06.044780    3294 status.go:343] host is not running, skipping remaining checks
	I0311 13:42:06.044782    3294 status.go:257] ha-674000 status: &{Name:ha-674000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:42:06.044793    3294 status.go:255] checking status of ha-674000-m02 ...
	I0311 13:42:06.044879    3294 status.go:330] ha-674000-m02 host status = "Stopped" (err=<nil>)
	I0311 13:42:06.044882    3294 status.go:343] host is not running, skipping remaining checks
	I0311 13:42:06.044884    3294 status.go:257] ha-674000-m02 status: &{Name:ha-674000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:42:06.044888    3294 status.go:255] checking status of ha-674000-m03 ...
	I0311 13:42:06.044982    3294 status.go:330] ha-674000-m03 host status = "Stopped" (err=<nil>)
	I0311 13:42:06.044984    3294 status.go:343] host is not running, skipping remaining checks
	I0311 13:42:06.044986    3294 status.go:257] ha-674000-m03 status: &{Name:ha-674000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:42:06.044990    3294 status.go:255] checking status of ha-674000-m04 ...
	I0311 13:42:06.045082    3294 status.go:330] ha-674000-m04 host status = "Stopped" (err=<nil>)
	I0311 13:42:06.045085    3294 status.go:343] host is not running, skipping remaining checks
	I0311 13:42:06.045087    3294 status.go:257] ha-674000-m04 status: &{Name:ha-674000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (32.321834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (2s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.941341s)
ha_test.go:413: expected profile "ha-674000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-674000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-674000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-674000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (56.596791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.00s)

TestMutliControlPlane/serial/StopCluster (251.16s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 stop -v=7 --alsologtostderr
E0311 13:42:37.247580    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:44:29.051336    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:46:14.176440    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-674000 stop -v=7 --alsologtostderr: (4m11.059555167s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr: exit status 7 (70.301542ms)

-- stdout --
	ha-674000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-674000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:46:19.197733    3375 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:46:19.197954    3375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:46:19.197958    3375 out.go:304] Setting ErrFile to fd 2...
	I0311 13:46:19.197960    3375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:46:19.198128    3375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:46:19.198304    3375 out.go:298] Setting JSON to false
	I0311 13:46:19.198328    3375 mustload.go:65] Loading cluster: ha-674000
	I0311 13:46:19.198359    3375 notify.go:220] Checking for updates...
	I0311 13:46:19.198606    3375 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:46:19.198615    3375 status.go:255] checking status of ha-674000 ...
	I0311 13:46:19.198866    3375 status.go:330] ha-674000 host status = "Stopped" (err=<nil>)
	I0311 13:46:19.198870    3375 status.go:343] host is not running, skipping remaining checks
	I0311 13:46:19.198873    3375 status.go:257] ha-674000 status: &{Name:ha-674000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:46:19.198884    3375 status.go:255] checking status of ha-674000-m02 ...
	I0311 13:46:19.199002    3375 status.go:330] ha-674000-m02 host status = "Stopped" (err=<nil>)
	I0311 13:46:19.199005    3375 status.go:343] host is not running, skipping remaining checks
	I0311 13:46:19.199008    3375 status.go:257] ha-674000-m02 status: &{Name:ha-674000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:46:19.199013    3375 status.go:255] checking status of ha-674000-m03 ...
	I0311 13:46:19.199132    3375 status.go:330] ha-674000-m03 host status = "Stopped" (err=<nil>)
	I0311 13:46:19.199135    3375 status.go:343] host is not running, skipping remaining checks
	I0311 13:46:19.199137    3375 status.go:257] ha-674000-m03 status: &{Name:ha-674000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 13:46:19.199142    3375 status.go:255] checking status of ha-674000-m04 ...
	I0311 13:46:19.199256    3375 status.go:330] ha-674000-m04 host status = "Stopped" (err=<nil>)
	I0311 13:46:19.199259    3375 status.go:343] host is not running, skipping remaining checks
	I0311 13:46:19.199262    3375 status.go:257] ha-674000-m04 status: &{Name:ha-674000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr": ha-674000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr": ha-674000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr": ha-674000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-674000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (34.514ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StopCluster (251.16s)

TestMutliControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-674000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-674000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.189811292s)

-- stdout --
	* [ha-674000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-674000" primary control-plane node in "ha-674000" cluster
	* Restarting existing qemu2 VM for "ha-674000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-674000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 13:46:19.265295    3379 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:46:19.265421    3379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:46:19.265424    3379 out.go:304] Setting ErrFile to fd 2...
	I0311 13:46:19.265427    3379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:46:19.265547    3379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:46:19.266498    3379 out.go:298] Setting JSON to false
	I0311 13:46:19.282440    3379 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2750,"bootTime":1710187229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:46:19.282505    3379 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:46:19.287355    3379 out.go:177] * [ha-674000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:46:19.293276    3379 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:46:19.297284    3379 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:46:19.293368    3379 notify.go:220] Checking for updates...
	I0311 13:46:19.304247    3379 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:46:19.307288    3379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:46:19.310275    3379 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:46:19.313275    3379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:46:19.316659    3379 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:46:19.316903    3379 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:46:19.321227    3379 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:46:19.328287    3379 start.go:297] selected driver: qemu2
	I0311 13:46:19.328294    3379 start.go:901] validating driver "qemu2" against &{Name:ha-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-674000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:46:19.328384    3379 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:46:19.330605    3379 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:46:19.330634    3379 cni.go:84] Creating CNI manager for ""
	I0311 13:46:19.330639    3379 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0311 13:46:19.330687    3379 start.go:340] cluster config:
	{Name:ha-674000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-674000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:46:19.334911    3379 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:46:19.343257    3379 out.go:177] * Starting "ha-674000" primary control-plane node in "ha-674000" cluster
	I0311 13:46:19.347111    3379 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:46:19.347122    3379 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:46:19.347131    3379 cache.go:56] Caching tarball of preloaded images
	I0311 13:46:19.347178    3379 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:46:19.347183    3379 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:46:19.347249    3379 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/ha-674000/config.json ...
	I0311 13:46:19.347706    3379 start.go:360] acquireMachinesLock for ha-674000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:46:19.347736    3379 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "ha-674000"
	I0311 13:46:19.347745    3379 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:46:19.347751    3379 fix.go:54] fixHost starting: 
	I0311 13:46:19.347857    3379 fix.go:112] recreateIfNeeded on ha-674000: state=Stopped err=<nil>
	W0311 13:46:19.347864    3379 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:46:19.352276    3379 out.go:177] * Restarting existing qemu2 VM for "ha-674000" ...
	I0311 13:46:19.359290    3379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:34:65:f2:bb:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/disk.qcow2
	I0311 13:46:19.361220    3379 main.go:141] libmachine: STDOUT: 
	I0311 13:46:19.361241    3379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:46:19.361273    3379 fix.go:56] duration metric: took 13.522708ms for fixHost
	I0311 13:46:19.361278    3379 start.go:83] releasing machines lock for "ha-674000", held for 13.538541ms
	W0311 13:46:19.361283    3379 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:46:19.361310    3379 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:46:19.361315    3379 start.go:728] Will try again in 5 seconds ...
	I0311 13:46:24.363456    3379 start.go:360] acquireMachinesLock for ha-674000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:46:24.363782    3379 start.go:364] duration metric: took 243.708µs to acquireMachinesLock for "ha-674000"
	I0311 13:46:24.363895    3379 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:46:24.363918    3379 fix.go:54] fixHost starting: 
	I0311 13:46:24.364602    3379 fix.go:112] recreateIfNeeded on ha-674000: state=Stopped err=<nil>
	W0311 13:46:24.364626    3379 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:46:24.370175    3379 out.go:177] * Restarting existing qemu2 VM for "ha-674000" ...
	I0311 13:46:24.378258    3379 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:34:65:f2:bb:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/ha-674000/disk.qcow2
	I0311 13:46:24.388155    3379 main.go:141] libmachine: STDOUT: 
	I0311 13:46:24.388231    3379 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:46:24.388302    3379 fix.go:56] duration metric: took 24.386541ms for fixHost
	I0311 13:46:24.388322    3379 start.go:83] releasing machines lock for "ha-674000", held for 24.516833ms
	W0311 13:46:24.388551    3379 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-674000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:46:24.396034    3379 out.go:177] 
	W0311 13:46:24.400197    3379 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:46:24.400231    3379 out.go:239] * 
	* 
	W0311 13:46:24.403133    3379 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:46:24.414986    3379 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-674000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (69.438416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartCluster (5.26s)
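
Note: the root cause here, and in the remaining failures below, is identical: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. "Connection refused" on a unix socket means nothing is listening on that path, i.e. the socket_vmnet daemon is not running on this build agent. A minimal Go probe that reproduces the driver's failing step is sketched below; it is a hypothetical diagnostic for this report, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet is expected to serve.
		// A "connection refused" here matches the driver error in this report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}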

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-674000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-674000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-674000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-674000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountI
P\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (31.670042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)
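
For context, the assertion at ha_test.go:413 decodes the output of `profile list --output json` and expects the profile's Status field to read "Degraded"; because the cluster never restarted, minikube reports "Stopped" instead. A sketch of the decoding involved, using only field names visible in the JSON above (the struct shape and binary path are illustrative, not the test's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList is a minimal view of `minikube profile list --output json`,
	// covering just the fields the test compares.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded" here; this run reports "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}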

TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-674000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-674000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.735125ms)

-- stdout --
	* The control-plane node ha-674000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-674000"

-- /stdout --
** stderr ** 
	I0311 13:46:24.637258    3397 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:46:24.637642    3397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:46:24.637645    3397 out.go:304] Setting ErrFile to fd 2...
	I0311 13:46:24.637647    3397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:46:24.637795    3397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:46:24.638049    3397 mustload.go:65] Loading cluster: ha-674000
	I0311 13:46:24.638256    3397 config.go:182] Loaded profile config "ha-674000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W0311 13:46:24.638538    3397 out.go:239] ! The control-plane node ha-674000 host is not running (will try others): state=Stopped
	! The control-plane node ha-674000 host is not running (will try others): state=Stopped
	W0311 13:46:24.638644    3397 out.go:239] ! The control-plane node ha-674000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-674000-m02 host is not running (will try others): state=Stopped
	I0311 13:46:24.642491    3397 out.go:177] * The control-plane node ha-674000-m03 host is not running: state=Stopped
	I0311 13:46:24.646365    3397 out.go:177]   To start a cluster, run: "minikube start -p ha-674000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-674000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-674000 -n ha-674000: exit status 7 (31.769166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-674000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

TestImageBuild/serial/Setup (10.27s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-285000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-285000 --driver=qemu2 : exit status 80 (10.198769625s)

-- stdout --
	* [image-285000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-285000" primary control-plane node in "image-285000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-285000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-285000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-285000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-285000 -n image-285000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-285000 -n image-285000: exit status 7 (69.285584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-285000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.27s)
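
The stdout above shows minikube's create-retry-fail sequence: the first StartHost fails, the half-created profile is deleted, one retry runs five seconds later, and the command then exits with GUEST_PROVISION (exit status 80). A stylized Go sketch of that control flow, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the qemu2 driver start; in this run it always
	// fails because /var/run/socket_vmnet refuses connections.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80)
			}
		}
	}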

TestJSONOutput/start/Command (9.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-825000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-825000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.7908575s)

-- stdout --
	{"specversion":"1.0","id":"0b7fad25-22f9-46e9-8cae-bb10bb400634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-825000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e31eb69-3fc3-490b-8a3e-a4ee26f370b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18358"}}
	{"specversion":"1.0","id":"ef47f0eb-64f2-4b5c-9777-16098a09d30a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig"}}
	{"specversion":"1.0","id":"e5a5b65a-1619-4022-9d5c-be6c44b02b4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b82fb291-b846-40be-a376-fe1da5daf36b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74385e7c-55de-4565-9bce-73e7c6034131","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube"}}
	{"specversion":"1.0","id":"f89657f6-be37-4e23-aad0-33681d7aca1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d748847d-87eb-45e9-8457-fd8b6b81245c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b164a7d-12bd-4d06-86b1-b5b068f103cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"8e47e46b-5ebe-4472-a890-7428655f6e1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-825000\" primary control-plane node in \"json-output-825000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1dd8836-2a01-43fd-b29c-1c71146ed83d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"70264839-c9fc-41a0-9e0c-08f63066f6d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-825000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f267c5c-0181-415c-a89e-8781a6d1c273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"25ae2899-08e9-4301-934e-9d3a0a7f5f38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b23acfae-23ca-4f1f-911f-94b6a465934a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-825000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"49374ad0-01e7-4ac3-8df5-29c153e166f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"f424fd16-c4dc-462f-879b-66fb646a300c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-825000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
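
The secondary failure here is mechanical: json_output_test.go reads stdout line by line and unmarshals each line as a CloudEvent, but the qemu driver's raw "OUTPUT:" and "ERROR:" lines are not JSON, so decoding stops at the leading 'O'. A small sketch of that parse step under the same input (a simplified stand-in for the test helper, not its actual code):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Two lines as they appear in the captured stdout above: one valid
		// CloudEvent, then a raw driver line that is not JSON.
		stdout := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.step\"}\nOUTPUT: \n"

		sc := bufio.NewScanner(strings.NewReader(stdout))
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				// Reproduces: invalid character 'O' looking for beginning of value
				fmt.Println("parse error:", err)
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}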

TestJSONOutput/pause/Command (0.07s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-825000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-825000 --output=json --user=testUser: exit status 83 (74.507125ms)

-- stdout --
	{"specversion":"1.0","id":"db0f02c3-3cf4-4642-8ab6-cb59fcd6f51a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-825000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8a4ee213-a736-4632-b30d-2bdbc7c09e50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-825000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-825000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.07s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-825000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-825000 --output=json --user=testUser: exit status 83 (44.156333ms)

-- stdout --
	* The control-plane node json-output-825000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-825000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-825000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-825000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-406000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-406000 --driver=qemu2 : exit status 80 (9.837875834s)

-- stdout --
	* [first-406000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-406000" primary control-plane node in "first-406000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-406000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-11 13:46:59.338922 -0700 PDT m=+2242.637256543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-408000 -n second-408000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-408000 -n second-408000: exit status 85 (80.413333ms)

-- stdout --
	* Profile "second-408000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-408000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-408000" host is not running, skipping log retrieval (state="* Profile \"second-408000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-408000\"")
helpers_test.go:175: Cleaning up "second-408000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-408000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-11 13:46:59.644302 -0700 PDT m=+2242.942943168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-406000 -n first-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-406000 -n first-406000: exit status 7 (32.495959ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-406000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-406000
--- FAIL: TestMinikubeProfile (10.27s)

TestMountStart/serial/StartWithMountFirst (10.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-107000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-107000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.592512209s)

-- stdout --
	* [mount-start-1-107000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-107000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-107000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-107000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-107000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-107000 -n mount-start-1-107000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-107000 -n mount-start-1-107000: exit status 7 (69.046292ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-107000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.66s)

TestMultiNode/serial/FreshStart2Nodes (10.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-457000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-457000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.950789291s)

-- stdout --
	* [multinode-457000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-457000" primary control-plane node in "multinode-457000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-457000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 13:47:10.793949    3563 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:47:10.794064    3563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:47:10.794067    3563 out.go:304] Setting ErrFile to fd 2...
	I0311 13:47:10.794069    3563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:47:10.794186    3563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:47:10.795273    3563 out.go:298] Setting JSON to false
	I0311 13:47:10.811240    3563 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2801,"bootTime":1710187229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:47:10.811302    3563 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:47:10.817349    3563 out.go:177] * [multinode-457000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:47:10.824342    3563 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:47:10.828416    3563 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:47:10.824399    3563 notify.go:220] Checking for updates...
	I0311 13:47:10.834330    3563 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:47:10.837401    3563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:47:10.840313    3563 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:47:10.843349    3563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:47:10.846510    3563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:47:10.850318    3563 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 13:47:10.857327    3563 start.go:297] selected driver: qemu2
	I0311 13:47:10.857333    3563 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:47:10.857341    3563 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:47:10.859594    3563 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:47:10.863267    3563 out.go:177] * Automatically selected the socket_vmnet network
	I0311 13:47:10.866471    3563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:47:10.866498    3563 cni.go:84] Creating CNI manager for ""
	I0311 13:47:10.866503    3563 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0311 13:47:10.866515    3563 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 13:47:10.866548    3563 start.go:340] cluster config:
	{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:47:10.871056    3563 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:47:10.878353    3563 out.go:177] * Starting "multinode-457000" primary control-plane node in "multinode-457000" cluster
	I0311 13:47:10.882143    3563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:47:10.882163    3563 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:47:10.882173    3563 cache.go:56] Caching tarball of preloaded images
	I0311 13:47:10.882230    3563 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:47:10.882236    3563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:47:10.882456    3563 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/multinode-457000/config.json ...
	I0311 13:47:10.882468    3563 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/multinode-457000/config.json: {Name:mkf876ae1848bc769b45c9ae53f7bc6a6aabe0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:47:10.882677    3563 start.go:360] acquireMachinesLock for multinode-457000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:47:10.882709    3563 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "multinode-457000"
	I0311 13:47:10.882721    3563 start.go:93] Provisioning new machine with config: &{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:47:10.882754    3563 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:47:10.891346    3563 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 13:47:10.909141    3563 start.go:159] libmachine.API.Create for "multinode-457000" (driver="qemu2")
	I0311 13:47:10.909170    3563 client.go:168] LocalClient.Create starting
	I0311 13:47:10.909224    3563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:47:10.909255    3563 main.go:141] libmachine: Decoding PEM data...
	I0311 13:47:10.909277    3563 main.go:141] libmachine: Parsing certificate...
	I0311 13:47:10.909319    3563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:47:10.909341    3563 main.go:141] libmachine: Decoding PEM data...
	I0311 13:47:10.909348    3563 main.go:141] libmachine: Parsing certificate...
	I0311 13:47:10.909750    3563 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:47:11.044509    3563 main.go:141] libmachine: Creating SSH key...
	I0311 13:47:11.187440    3563 main.go:141] libmachine: Creating Disk image...
	I0311 13:47:11.187449    3563 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:47:11.187609    3563 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:47:11.199705    3563 main.go:141] libmachine: STDOUT: 
	I0311 13:47:11.199724    3563 main.go:141] libmachine: STDERR: 
	I0311 13:47:11.199774    3563 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2 +20000M
	I0311 13:47:11.210273    3563 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:47:11.210292    3563 main.go:141] libmachine: STDERR: 
	I0311 13:47:11.210304    3563 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:47:11.210309    3563 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:47:11.210336    3563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c0:f9:88:30:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:47:11.212090    3563 main.go:141] libmachine: STDOUT: 
	I0311 13:47:11.212106    3563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:47:11.212125    3563 client.go:171] duration metric: took 303.1005ms to LocalClient.Create
	I0311 13:47:13.213441    3563 start.go:128] duration metric: took 2.33175175s to createHost
	I0311 13:47:13.213498    3563 start.go:83] releasing machines lock for "multinode-457000", held for 2.331875208s
	W0311 13:47:13.213553    3563 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:47:13.224607    3563 out.go:177] * Deleting "multinode-457000" in qemu2 ...
	W0311 13:47:13.251910    3563 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:47:13.251937    3563 start.go:728] Will try again in 5 seconds ...
	I0311 13:47:18.252281    3563 start.go:360] acquireMachinesLock for multinode-457000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:47:18.252684    3563 start.go:364] duration metric: took 309.625µs to acquireMachinesLock for "multinode-457000"
	I0311 13:47:18.252837    3563 start.go:93] Provisioning new machine with config: &{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:47:18.253237    3563 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:47:18.262822    3563 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 13:47:18.310570    3563 start.go:159] libmachine.API.Create for "multinode-457000" (driver="qemu2")
	I0311 13:47:18.310624    3563 client.go:168] LocalClient.Create starting
	I0311 13:47:18.310744    3563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:47:18.310817    3563 main.go:141] libmachine: Decoding PEM data...
	I0311 13:47:18.310834    3563 main.go:141] libmachine: Parsing certificate...
	I0311 13:47:18.310888    3563 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:47:18.310929    3563 main.go:141] libmachine: Decoding PEM data...
	I0311 13:47:18.310942    3563 main.go:141] libmachine: Parsing certificate...
	I0311 13:47:18.311455    3563 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:47:18.459770    3563 main.go:141] libmachine: Creating SSH key...
	I0311 13:47:18.631660    3563 main.go:141] libmachine: Creating Disk image...
	I0311 13:47:18.631673    3563 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:47:18.631853    3563 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:47:18.644218    3563 main.go:141] libmachine: STDOUT: 
	I0311 13:47:18.644237    3563 main.go:141] libmachine: STDERR: 
	I0311 13:47:18.644308    3563 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2 +20000M
	I0311 13:47:18.654890    3563 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:47:18.654907    3563 main.go:141] libmachine: STDERR: 
	I0311 13:47:18.654921    3563 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:47:18.654925    3563 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:47:18.654978    3563 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:69:d4:92:96:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:47:18.656697    3563 main.go:141] libmachine: STDOUT: 
	I0311 13:47:18.656723    3563 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:47:18.656740    3563 client.go:171] duration metric: took 346.221333ms to LocalClient.Create
	I0311 13:47:20.658321    3563 start.go:128] duration metric: took 2.405781334s to createHost
	I0311 13:47:20.658384    3563 start.go:83] releasing machines lock for "multinode-457000", held for 2.406400792s
	W0311 13:47:20.658784    3563 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:47:20.673483    3563 out.go:177] 
	W0311 13:47:20.676588    3563 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:47:20.676612    3563 out.go:239] * 
	* 
	W0311 13:47:20.679071    3563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:47:20.691427    3563 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-457000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (68.436167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.02s)
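
Every TestMultiNode failure below traces back to the step above: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet (Connection refused), so no VM is ever created and the profile is left Stopped. A minimal standalone sketch of a pre-flight check for that socket, using only the path visible in the failing command line above (this program is illustrative and not part of the minikube test suite):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path copied verbatim from the failing qemu command line.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Same condition the driver logs:
            //   Failed to connect to "/var/run/socket_vmnet": Connection refused
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening at", sock)
    }

If the dial fails, the socket_vmnet daemon is simply not running on the build host; every subtest below inherits the stopped profile.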

TestMultiNode/serial/DeployApp2Nodes (117.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (122.997584ms)

** stderr ** 
	error: cluster "multinode-457000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- rollout status deployment/busybox: exit status 1 (59.700542ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.447833ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.513541ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.113791ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.444333ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.388041ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.294625ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.816125ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.319667ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.396459ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.29775ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.253708ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.05825ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.939083ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.915875ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.819125ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.04525ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (32.340541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (117.68s)
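
The 117s spent in this subtest is a polling loop, not a hang: each `failed to retrieve Pod IPs (may be temporary)` above is one retry, and only once the retry budget is exhausted does the test harden the error into `failed to resolve pod IPs`. A sketch of that pattern; podIPs here is a hypothetical stand-in for the test helper (which shells out to `minikube kubectl -- get pods`), and the real schedule and budget in multinode_test.go may differ:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // podIPs is a hypothetical stand-in: it always fails the way the log does,
    // because the cluster was never created.
    func podIPs() ([]string, error) {
        return nil, errors.New(`no server found for cluster "multinode-457000"`)
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed retry budget
        for delay := time.Second; time.Now().Before(deadline); delay *= 2 {
            ips, err := podIPs()
            if err == nil {
                fmt.Println("pod IPs:", ips)
                return
            }
            fmt.Println("failed to retrieve Pod IPs (may be temporary):", err)
            time.Sleep(delay)
        }
        fmt.Println("failed to resolve pod IPs: retries exhausted")
    }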

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-457000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.903292ms)

** stderr ** 
	error: no server found for cluster "multinode-457000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (32.208208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-457000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-457000 -v 3 --alsologtostderr: exit status 83 (43.453667ms)

-- stdout --
	* The control-plane node multinode-457000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-457000"

-- /stdout --
** stderr ** 
	I0311 13:49:18.572250    3657 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:18.572424    3657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:18.572427    3657 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:18.572429    3657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:18.572550    3657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:18.572787    3657 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:18.572968    3657 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:18.577624    3657 out.go:177] * The control-plane node multinode-457000 host is not running: state=Stopped
	I0311 13:49:18.580736    3657 out.go:177]   To start a cluster, run: "minikube start -p multinode-457000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-457000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (31.967584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
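
Exit status 83 here is a refusal, not a crash: `node add` loads the profile via mustload, sees the control-plane host state is Stopped, prints the start hint, and bails out before attempting the add. A hedged sketch of that guard; the struct and field names are illustrative rather than minikube's actual API, and only the messages and the exit status are taken from the output above:

    package main

    import (
        "fmt"
        "os"
    )

    // profile is illustrative only; minikube's real config type is loaded by
    // mustload, as the stderr above shows.
    type profile struct{ Name, HostState string }

    func main() {
        p := profile{Name: "multinode-457000", HostState: "Stopped"}
        if p.HostState != "Running" {
            fmt.Printf("* The control-plane node %s host is not running: state=%s\n", p.Name, p.HostState)
            fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+p.Name)
            os.Exit(83) // the exit status observed above
        }
        // ...only now would a node actually be added...
    }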

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-457000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-457000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (31.86725ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-457000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-457000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-457000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (32.1135ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
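
The two errors logged above are one failure seen twice: kubectl exits non-zero with nothing on stdout (the context was never written to the kubeconfig because the cluster never started), and decoding that empty stdout is what yields `unexpected end of JSON input`. A minimal reproduction using only the standard library:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl wrote nothing to stdout, so the test unmarshals zero bytes.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }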

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-457000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-457000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-457000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-457000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (31.431291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
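
Unwrapping the JSON above shows why the assertion fails: the profile's Config.Nodes array holds exactly one entry (the control plane that never started), where the test requires 3. A sketch of the same count over a pared-down copy of that JSON; the field names are copied from the log, and the full schema belongs to minikube:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Only the fields needed for the node count.
    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct{ Name string }
            }
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-457000","Config":{"Nodes":[{"Name":""}]}}]}`)
        var p profileList
        if err := json.Unmarshal(raw, &p); err != nil {
            panic(err)
        }
        for _, v := range p.Valid {
            fmt.Printf("%s: %d node(s), want 3\n", v.Name, len(v.Config.Nodes)) // prints 1
        }
    }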

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status --output json --alsologtostderr: exit status 7 (32.253167ms)

-- stdout --
	{"Name":"multinode-457000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0311 13:49:18.817436    3670 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:18.817644    3670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:18.817647    3670 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:18.817650    3670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:18.817780    3670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:18.817908    3670 out.go:298] Setting JSON to true
	I0311 13:49:18.817923    3670 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:18.817975    3670 notify.go:220] Checking for updates...
	I0311 13:49:18.818117    3670 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:18.818123    3670 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:18.818332    3670 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:18.818336    3670 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:18.818338    3670 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-457000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (32.127292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
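
`json: cannot unmarshal object into Go value of type []cmd.Status` is a shape mismatch rather than corrupt output: with a single stopped node, `minikube status --output json` emitted one JSON object, while the test decodes into a slice (a healthy multi-node cluster would presumably emit an array). A minimal reproduction; the Status type below only mirrors the fields visible in the stdout above, not minikube's real cmd.Status:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors only the fields visible in the stdout above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        out := []byte(`{"Name":"multinode-457000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var statuses []Status
        if err := json.Unmarshal(out, &statuses); err != nil {
            fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
        }
    }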

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 node stop m03: exit status 85 (49.06475ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-457000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status: exit status 7 (32.081625ms)

-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr: exit status 7 (31.519917ms)

-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:49:18.963177    3678 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:18.963334    3678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:18.963338    3678 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:18.963340    3678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:18.963461    3678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:18.963584    3678 out.go:298] Setting JSON to false
	I0311 13:49:18.963598    3678 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:18.963657    3678 notify.go:220] Checking for updates...
	I0311 13:49:18.963789    3678 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:18.963794    3678 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:18.963998    3678 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:18.964002    3678 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:18.964004    3678 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr": multinode-457000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (31.967125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (51.78s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.007042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0311 13:49:19.027168    3682 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:19.027388    3682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:19.027392    3682 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:19.027394    3682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:19.027513    3682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:19.027741    3682 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:19.027925    3682 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:19.031305    3682 out.go:177] 
	W0311 13:49:19.034244    3682 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0311 13:49:19.034248    3682 out.go:239] * 
	* 
	W0311 13:49:19.035798    3682 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:49:19.039090    3682 out.go:177] 

** /stderr **
multinode_test.go:284: I0311 13:49:19.027168    3682 out.go:291] Setting OutFile to fd 1 ...
I0311 13:49:19.027388    3682 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:49:19.027392    3682 out.go:304] Setting ErrFile to fd 2...
I0311 13:49:19.027394    3682 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:49:19.027513    3682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:49:19.027741    3682 mustload.go:65] Loading cluster: multinode-457000
I0311 13:49:19.027925    3682 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:49:19.031305    3682 out.go:177] 
W0311 13:49:19.034244    3682 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0311 13:49:19.034248    3682 out.go:239] * 
* 
W0311 13:49:19.035798    3682 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0311 13:49:19.039090    3682 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-457000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (32.161542ms)

-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:49:19.074759    3684 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:19.074899    3684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:19.074902    3684 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:19.074904    3684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:19.075022    3684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:19.075150    3684 out.go:298] Setting JSON to false
	I0311 13:49:19.075162    3684 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:19.075213    3684 notify.go:220] Checking for updates...
	I0311 13:49:19.075345    3684 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:19.075351    3684 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:19.075545    3684 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:19.075550    3684 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:19.075552    3684 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (76.074417ms)

-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0311 13:49:20.430396    3686 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:20.430574    3686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:20.430579    3686 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:20.430582    3686 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:20.430741    3686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:20.430894    3686 out.go:298] Setting JSON to false
	I0311 13:49:20.430909    3686 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:20.430943    3686 notify.go:220] Checking for updates...
	I0311 13:49:20.431133    3686 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:20.431140    3686 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:20.431428    3686 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:20.431433    3686 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:20.431436    3686 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (76.053417ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:49:22.225444    3688 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:22.225615    3688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:22.225620    3688 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:22.225623    3688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:22.225781    3688 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:22.225952    3688 out.go:298] Setting JSON to false
	I0311 13:49:22.225967    3688 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:22.226003    3688 notify.go:220] Checking for updates...
	I0311 13:49:22.226244    3688 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:22.226253    3688 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:22.226521    3688 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:22.226526    3688 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:22.226529    3688 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (76.411833ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:49:25.204981    3690 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:25.205132    3690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:25.205137    3690 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:25.205140    3690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:25.205286    3690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:25.205447    3690 out.go:298] Setting JSON to false
	I0311 13:49:25.205462    3690 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:25.205505    3690 notify.go:220] Checking for updates...
	I0311 13:49:25.205702    3690 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:25.205710    3690 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:25.205983    3690 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:25.205988    3690 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:25.205991    3690 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (74.664ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:49:27.588511    3692 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:27.588679    3692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:27.588684    3692 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:27.588688    3692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:27.588842    3692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:27.589006    3692 out.go:298] Setting JSON to false
	I0311 13:49:27.589022    3692 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:27.589066    3692 notify.go:220] Checking for updates...
	I0311 13:49:27.589270    3692 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:27.589278    3692 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:27.589541    3692 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:27.589546    3692 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:27.589549    3692 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
E0311 13:49:29.019966    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (73.363958ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:49:32.630225    3694 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:32.630419    3694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:32.630423    3694 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:32.630426    3694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:32.630577    3694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:32.630728    3694 out.go:298] Setting JSON to false
	I0311 13:49:32.630742    3694 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:32.630784    3694 notify.go:220] Checking for updates...
	I0311 13:49:32.631000    3694 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:32.631008    3694 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:32.631276    3694 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:32.631280    3694 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:32.631285    3694 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (75.836ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:49:42.936432    3699 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:42.936599    3699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:42.936603    3699 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:42.936607    3699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:42.936756    3699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:42.936922    3699 out.go:298] Setting JSON to false
	I0311 13:49:42.936938    3699 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:42.936976    3699 notify.go:220] Checking for updates...
	I0311 13:49:42.937184    3699 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:42.937192    3699 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:42.937449    3699 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:42.937453    3699 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:42.937456    3699 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (75.600334ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:49:49.182438    3701 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:49:49.182636    3701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:49.182640    3701 out.go:304] Setting ErrFile to fd 2...
	I0311 13:49:49.182643    3701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:49:49.182815    3701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:49:49.182999    3701 out.go:298] Setting JSON to false
	I0311 13:49:49.183014    3701 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:49:49.183056    3701 notify.go:220] Checking for updates...
	I0311 13:49:49.183258    3701 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:49:49.183265    3701 status.go:255] checking status of multinode-457000 ...
	I0311 13:49:49.183517    3701 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:49:49.183521    3701 status.go:343] host is not running, skipping remaining checks
	I0311 13:49:49.183524    3701 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr: exit status 7 (75.844958ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:50:10.739667    3706 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:50:10.739885    3706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:10.739890    3706 out.go:304] Setting ErrFile to fd 2...
	I0311 13:50:10.739893    3706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:10.740053    3706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:50:10.740218    3706 out.go:298] Setting JSON to false
	I0311 13:50:10.740235    3706 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:50:10.740266    3706 notify.go:220] Checking for updates...
	I0311 13:50:10.740487    3706 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:50:10.740495    3706 status.go:255] checking status of multinode-457000 ...
	I0311 13:50:10.740783    3706 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:50:10.740788    3706 status.go:343] host is not running, skipping remaining checks
	I0311 13:50:10.740791    3706 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (34.163417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (51.78s)
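Every retry in this block has the same shape: "node start m03" exits 85, the host stays Stopped, and each follow-up status poll exits 7 (which helpers_test flags as "may be ok") until the test gives up (51.78s in total). A minimal sketch for reproducing the poll by hand, assuming the same binary path and profile name as this run:

    # Sketch only, not part of the test run: repeat the status poll manually.
    for i in 1 2 3; do
        out/minikube-darwin-arm64 -p multinode-457000 status -v=7 --alsologtostderr
        echo "exit status: $?"   # 7 means the host is stopped, not a hard error
        sleep 2
    done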
TestMultiNode/serial/RestartKeepsNodes (8.61s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-457000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-457000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-457000: (3.25913725s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-457000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-457000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.211332s)
-- stdout --
	* [multinode-457000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-457000" primary control-plane node in "multinode-457000" cluster
	* Restarting existing qemu2 VM for "multinode-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0311 13:50:14.128611    3730 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:50:14.128771    3730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:14.128775    3730 out.go:304] Setting ErrFile to fd 2...
	I0311 13:50:14.128778    3730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:14.128922    3730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:50:14.130058    3730 out.go:298] Setting JSON to false
	I0311 13:50:14.148645    3730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2985,"bootTime":1710187229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:50:14.148700    3730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:50:14.153052    3730 out.go:177] * [multinode-457000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:50:14.161064    3730 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:50:14.161093    3730 notify.go:220] Checking for updates...
	I0311 13:50:14.164992    3730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:50:14.168015    3730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:50:14.171014    3730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:50:14.173966    3730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:50:14.176987    3730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:50:14.180336    3730 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:50:14.180417    3730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:50:14.184980    3730 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:50:14.192037    3730 start.go:297] selected driver: qemu2
	I0311 13:50:14.192043    3730 start.go:901] validating driver "qemu2" against &{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:50:14.192098    3730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:50:14.194477    3730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:50:14.194507    3730 cni.go:84] Creating CNI manager for ""
	I0311 13:50:14.194512    3730 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 13:50:14.194552    3730 start.go:340] cluster config:
	{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:50:14.199200    3730 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:14.205999    3730 out.go:177] * Starting "multinode-457000" primary control-plane node in "multinode-457000" cluster
	I0311 13:50:14.208982    3730 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:50:14.208998    3730 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:50:14.209010    3730 cache.go:56] Caching tarball of preloaded images
	I0311 13:50:14.209076    3730 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:50:14.209086    3730 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:50:14.209152    3730 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/multinode-457000/config.json ...
	I0311 13:50:14.209644    3730 start.go:360] acquireMachinesLock for multinode-457000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:50:14.209679    3730 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "multinode-457000"
	I0311 13:50:14.209689    3730 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:50:14.209694    3730 fix.go:54] fixHost starting: 
	I0311 13:50:14.209819    3730 fix.go:112] recreateIfNeeded on multinode-457000: state=Stopped err=<nil>
	W0311 13:50:14.209827    3730 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:50:14.214030    3730 out.go:177] * Restarting existing qemu2 VM for "multinode-457000" ...
	I0311 13:50:14.220989    3730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:69:d4:92:96:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:50:14.223033    3730 main.go:141] libmachine: STDOUT: 
	I0311 13:50:14.223054    3730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:50:14.223085    3730 fix.go:56] duration metric: took 13.390209ms for fixHost
	I0311 13:50:14.223089    3730 start.go:83] releasing machines lock for "multinode-457000", held for 13.405958ms
	W0311 13:50:14.223096    3730 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:50:14.223129    3730 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:14.223134    3730 start.go:728] Will try again in 5 seconds ...
	I0311 13:50:19.225171    3730 start.go:360] acquireMachinesLock for multinode-457000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:50:19.225504    3730 start.go:364] duration metric: took 245.917µs to acquireMachinesLock for "multinode-457000"
	I0311 13:50:19.225637    3730 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:50:19.225653    3730 fix.go:54] fixHost starting: 
	I0311 13:50:19.226257    3730 fix.go:112] recreateIfNeeded on multinode-457000: state=Stopped err=<nil>
	W0311 13:50:19.226288    3730 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:50:19.229711    3730 out.go:177] * Restarting existing qemu2 VM for "multinode-457000" ...
	I0311 13:50:19.233924    3730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:69:d4:92:96:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:50:19.243415    3730 main.go:141] libmachine: STDOUT: 
	I0311 13:50:19.243474    3730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:50:19.243545    3730 fix.go:56] duration metric: took 17.888125ms for fixHost
	I0311 13:50:19.243557    3730 start.go:83] releasing machines lock for "multinode-457000", held for 18.032834ms
	W0311 13:50:19.243745    3730 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:19.252691    3730 out.go:177] 
	W0311 13:50:19.255836    3730 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:50:19.255873    3730 out.go:239] * 
	* 
	W0311 13:50:19.259094    3730 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:50:19.265695    3730 out.go:177] 
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-457000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-457000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (34.709292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.61s)
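The stderr above contains the root cause of this failure and of the cascade that follows: both qemu2 restart attempts die with Failed to connect to "/var/run/socket_vmnet": Connection refused, so fixHost gives up and minikube exits with GUEST_PROVISION (exit status 80). Note that the message at multinode_test.go:328 quotes the "node list" args, but the command that returned exit status 80 is the "start" invocation above. A diagnostic sketch for the host side, assuming socket_vmnet sits at the paths shown in the libmachine command line; the install method, and therefore the exact way to restart the daemon, is not visible in this log:

    # Sketch: check whether anything is serving /var/run/socket_vmnet on the host.
    ls -l /var/run/socket_vmnet                    # does the socket file exist?
    pgrep -fl socket_vmnet                         # is a socket_vmnet process running?
    sudo lsof -U 2>/dev/null | grep socket_vmnet   # is the daemon holding the socket?
    # If nothing is listening, restart the daemon (the service name depends on how
    # it was installed, e.g. a Homebrew service or a custom launchd job) and re-run.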
TestMultiNode/serial/DeleteNode (0.11s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 node delete m03: exit status 83 (42.665959ms)
-- stdout --
	* The control-plane node multinode-457000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-457000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-457000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr: exit status 7 (31.951208ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:50:19.458773    3744 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:50:19.458925    3744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:19.458928    3744 out.go:304] Setting ErrFile to fd 2...
	I0311 13:50:19.458931    3744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:19.459053    3744 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:50:19.459168    3744 out.go:298] Setting JSON to false
	I0311 13:50:19.459182    3744 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:50:19.459245    3744 notify.go:220] Checking for updates...
	I0311 13:50:19.459375    3744 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:50:19.459381    3744 status.go:255] checking status of multinode-457000 ...
	I0311 13:50:19.459604    3744 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:50:19.459608    3744 status.go:343] host is not running, skipping remaining checks
	I0311 13:50:19.459610    3744 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (32.06925ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
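"node delete m03" never reaches node handling here: with the control-plane host Stopped, the command short-circuits with the "host is not running" message and exit status 83. The recovery path is the one minikube itself prints in the earlier stderr; a sketch, only sensible once the socket_vmnet daemon is reachable again:

    # Sketch: recreate the broken profile, following minikube's own advice above.
    out/minikube-darwin-arm64 delete -p multinode-457000
    out/minikube-darwin-arm64 start -p multinode-457000 --wait=true --driver=qemu2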
TestMultiNode/serial/StopMultiNode (4.01s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-457000 stop: (3.881806042s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status: exit status 7 (66.580208ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr: exit status 7 (34.040917ms)
-- stdout --
	multinode-457000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0311 13:50:23.473850    3770 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:50:23.474000    3770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:23.474003    3770 out.go:304] Setting ErrFile to fd 2...
	I0311 13:50:23.474005    3770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:23.474134    3770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:50:23.474257    3770 out.go:298] Setting JSON to false
	I0311 13:50:23.474269    3770 mustload.go:65] Loading cluster: multinode-457000
	I0311 13:50:23.474330    3770 notify.go:220] Checking for updates...
	I0311 13:50:23.474464    3770 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:50:23.474469    3770 status.go:255] checking status of multinode-457000 ...
	I0311 13:50:23.474674    3770 status.go:330] multinode-457000 host status = "Stopped" (err=<nil>)
	I0311 13:50:23.474678    3770 status.go:343] host is not running, skipping remaining checks
	I0311 13:50:23.474680    3770 status.go:257] multinode-457000 status: &{Name:multinode-457000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr": multinode-457000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-457000 status --alsologtostderr": multinode-457000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (31.651125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.01s)
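The stop itself succeeds in 3.88 seconds; the assertions at multinode_test.go:364 and :368 fail because the status output lists only the one control-plane node, fewer stopped hosts and kubelets than a multi-node cluster should report, since the worker nodes were never created after the earlier start failures. A sketch of what those assertions are effectively counting, assuming plain grep semantics:

    # Sketch: count the per-node "Stopped" lines the assertions key off.
    out/minikube-darwin-arm64 -p multinode-457000 status | grep -c "host: Stopped"
    out/minikube-darwin-arm64 -p multinode-457000 status | grep -c "kubelet: Stopped"
    # Both print 1 here; each additional node would add one more line.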
TestMultiNode/serial/RestartMultiNode (5.26s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-457000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-457000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.189145208s)
-- stdout --
	* [multinode-457000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-457000" primary control-plane node in "multinode-457000" cluster
	* Restarting existing qemu2 VM for "multinode-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0311 13:50:23.536746    3774 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:50:23.536884    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:23.536887    3774 out.go:304] Setting ErrFile to fd 2...
	I0311 13:50:23.536890    3774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:23.537020    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:50:23.538086    3774 out.go:298] Setting JSON to false
	I0311 13:50:23.554044    3774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2994,"bootTime":1710187229,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:50:23.554133    3774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:50:23.559378    3774 out.go:177] * [multinode-457000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:50:23.567422    3774 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:50:23.571395    3774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:50:23.567455    3774 notify.go:220] Checking for updates...
	I0311 13:50:23.574394    3774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:50:23.577371    3774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:50:23.580327    3774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:50:23.583382    3774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:50:23.586626    3774 config.go:182] Loaded profile config "multinode-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:50:23.586882    3774 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:50:23.591349    3774 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:50:23.598396    3774 start.go:297] selected driver: qemu2
	I0311 13:50:23.598403    3774 start.go:901] validating driver "qemu2" against &{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:50:23.598486    3774 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:50:23.600798    3774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:50:23.600848    3774 cni.go:84] Creating CNI manager for ""
	I0311 13:50:23.600853    3774 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0311 13:50:23.600893    3774 start.go:340] cluster config:
	{Name:multinode-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-457000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:50:23.605360    3774 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:23.612352    3774 out.go:177] * Starting "multinode-457000" primary control-plane node in "multinode-457000" cluster
	I0311 13:50:23.616229    3774 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:50:23.616243    3774 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:50:23.616253    3774 cache.go:56] Caching tarball of preloaded images
	I0311 13:50:23.616302    3774 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:50:23.616308    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:50:23.616355    3774 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/multinode-457000/config.json ...
	I0311 13:50:23.616839    3774 start.go:360] acquireMachinesLock for multinode-457000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:50:23.616871    3774 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "multinode-457000"
	I0311 13:50:23.616881    3774 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:50:23.616886    3774 fix.go:54] fixHost starting: 
	I0311 13:50:23.616999    3774 fix.go:112] recreateIfNeeded on multinode-457000: state=Stopped err=<nil>
	W0311 13:50:23.617008    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:50:23.625388    3774 out.go:177] * Restarting existing qemu2 VM for "multinode-457000" ...
	I0311 13:50:23.629366    3774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:69:d4:92:96:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:50:23.631405    3774 main.go:141] libmachine: STDOUT: 
	I0311 13:50:23.631430    3774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:50:23.631458    3774 fix.go:56] duration metric: took 14.572834ms for fixHost
	I0311 13:50:23.631462    3774 start.go:83] releasing machines lock for "multinode-457000", held for 14.586584ms
	W0311 13:50:23.631469    3774 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:50:23.631498    3774 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:23.631503    3774 start.go:728] Will try again in 5 seconds ...
	I0311 13:50:28.633608    3774 start.go:360] acquireMachinesLock for multinode-457000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:50:28.634048    3774 start.go:364] duration metric: took 335.875µs to acquireMachinesLock for "multinode-457000"
	I0311 13:50:28.634178    3774 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:50:28.634200    3774 fix.go:54] fixHost starting: 
	I0311 13:50:28.634934    3774 fix.go:112] recreateIfNeeded on multinode-457000: state=Stopped err=<nil>
	W0311 13:50:28.634960    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:50:28.640498    3774 out.go:177] * Restarting existing qemu2 VM for "multinode-457000" ...
	I0311 13:50:28.649625    3774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:69:d4:92:96:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/multinode-457000/disk.qcow2
	I0311 13:50:28.660313    3774 main.go:141] libmachine: STDOUT: 
	I0311 13:50:28.660394    3774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:50:28.660511    3774 fix.go:56] duration metric: took 26.309333ms for fixHost
	I0311 13:50:28.660533    3774 start.go:83] releasing machines lock for "multinode-457000", held for 26.461042ms
	W0311 13:50:28.660807    3774 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:28.669404    3774 out.go:177] 
	W0311 13:50:28.673455    3774 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:50:28.673481    3774 out.go:239] * 
	* 
	W0311 13:50:28.676453    3774 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:50:28.684349    3774 out.go:177] 
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-457000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (67.924917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
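
Note: every failed start in this section shares one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no qemu2 VM ever boots. A minimal triage sketch for the build agent, assuming socket_vmnet is installed under /opt/socket_vmnet as in the commands logged above (the no-op `true` probe is illustrative, not part of the test suite):

	# Is the daemon loaded, and does its socket exist?
	sudo launchctl list | grep socket_vmnet
	ls -l /var/run/socket_vmnet
	# Probe the socket the same way minikube invokes it (socket path, then a command);
	# seeing "Connection refused" here reproduces the failure outside the tests.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true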
TestMultiNode/serial/ValidateNameConflict (20.24s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-457000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-457000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-457000-m01 --driver=qemu2 : exit status 80 (9.8910945s)
-- stdout --
	* [multinode-457000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-457000-m01" primary control-plane node in "multinode-457000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-457000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-457000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-457000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-457000-m02 --driver=qemu2 : exit status 80 (10.08845s)
-- stdout --
	* [multinode-457000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-457000-m02" primary control-plane node in "multinode-457000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-457000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-457000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-457000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-457000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-457000: exit status 83 (81.698917ms)
-- stdout --
	* The control-plane node multinode-457000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-457000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-457000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-457000 -n multinode-457000: exit status 7 (33.104625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.24s)
TestPreload (9.89s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.719786208s)
-- stdout --
	* [test-preload-516000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-516000" primary control-plane node in "test-preload-516000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-516000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0311 13:50:49.166468    3837 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:50:49.166581    3837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:49.166584    3837 out.go:304] Setting ErrFile to fd 2...
	I0311 13:50:49.166587    3837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:50:49.166718    3837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:50:49.167765    3837 out.go:298] Setting JSON to false
	I0311 13:50:49.183737    3837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3020,"bootTime":1710187229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:50:49.183796    3837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:50:49.189542    3837 out.go:177] * [test-preload-516000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:50:49.197456    3837 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:50:49.197486    3837 notify.go:220] Checking for updates...
	I0311 13:50:49.202473    3837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:50:49.205441    3837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:50:49.208520    3837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:50:49.211476    3837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:50:49.212878    3837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:50:49.215719    3837 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:50:49.215776    3837 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:50:49.219473    3837 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 13:50:49.224468    3837 start.go:297] selected driver: qemu2
	I0311 13:50:49.224474    3837 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:50:49.224481    3837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:50:49.226725    3837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:50:49.229442    3837 out.go:177] * Automatically selected the socket_vmnet network
	I0311 13:50:49.232567    3837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 13:50:49.232599    3837 cni.go:84] Creating CNI manager for ""
	I0311 13:50:49.232605    3837 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:50:49.232610    3837 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 13:50:49.232637    3837 start.go:340] cluster config:
	{Name:test-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:50:49.237043    3837 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.244477    3837 out.go:177] * Starting "test-preload-516000" primary control-plane node in "test-preload-516000" cluster
	I0311 13:50:49.248444    3837 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0311 13:50:49.248532    3837 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/test-preload-516000/config.json ...
	I0311 13:50:49.248557    3837 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/test-preload-516000/config.json: {Name:mk4542c59a392d5e41171292b1a336eb661c7ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:50:49.248582    3837 cache.go:107] acquiring lock: {Name:mkc90b595b88f4abeb655b3d9dc69d8b56b767a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.248606    3837 cache.go:107] acquiring lock: {Name:mk6cbe1f5d618d9a231cfd1cc38b06cfdaea58c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.248643    3837 cache.go:107] acquiring lock: {Name:mk66f923584bfe97b6e859e803888457f9be102f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.248783    3837 cache.go:107] acquiring lock: {Name:mka632e5e7844f05609417f6270af956dff05878 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.248838    3837 cache.go:107] acquiring lock: {Name:mkdecce72dd677752704b703e7476437372a6f7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.248848    3837 cache.go:107] acquiring lock: {Name:mkf6a4396bb7b773fb62deda5ab948d333b688de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.248873    3837 cache.go:107] acquiring lock: {Name:mk054c61a9b371c6d9b043a1c6319be685da2eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.249073    3837 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:50:49.249091    3837 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:50:49.249113    3837 cache.go:107] acquiring lock: {Name:mkf1b13302478506bd7cd4847eab6e9a77e93d2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:50:49.249136    3837 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0311 13:50:49.249149    3837 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0311 13:50:49.249143    3837 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0311 13:50:49.249201    3837 start.go:360] acquireMachinesLock for test-preload-516000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:50:49.249152    3837 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:50:49.249278    3837 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 13:50:49.249311    3837 start.go:364] duration metric: took 86.708µs to acquireMachinesLock for "test-preload-516000"
	I0311 13:50:49.249327    3837 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0311 13:50:49.249358    3837 start.go:93] Provisioning new machine with config: &{Name:test-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:50:49.249395    3837 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:50:49.258451    3837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 13:50:49.263445    3837 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:50:49.264450    3837 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0311 13:50:49.264581    3837 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0311 13:50:49.264719    3837 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0311 13:50:49.268917    3837 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 13:50:49.269024    3837 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:50:49.269093    3837 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:50:49.269162    3837 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0311 13:50:49.276341    3837 start.go:159] libmachine.API.Create for "test-preload-516000" (driver="qemu2")
	I0311 13:50:49.276355    3837 client.go:168] LocalClient.Create starting
	I0311 13:50:49.276444    3837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:50:49.276473    3837 main.go:141] libmachine: Decoding PEM data...
	I0311 13:50:49.276481    3837 main.go:141] libmachine: Parsing certificate...
	I0311 13:50:49.276531    3837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:50:49.276553    3837 main.go:141] libmachine: Decoding PEM data...
	I0311 13:50:49.276561    3837 main.go:141] libmachine: Parsing certificate...
	I0311 13:50:49.276900    3837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:50:49.415741    3837 main.go:141] libmachine: Creating SSH key...
	I0311 13:50:49.462179    3837 main.go:141] libmachine: Creating Disk image...
	I0311 13:50:49.462197    3837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:50:49.462374    3837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2
	I0311 13:50:49.475217    3837 main.go:141] libmachine: STDOUT: 
	I0311 13:50:49.475241    3837 main.go:141] libmachine: STDERR: 
	I0311 13:50:49.475306    3837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2 +20000M
	I0311 13:50:49.486998    3837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:50:49.487035    3837 main.go:141] libmachine: STDERR: 
	I0311 13:50:49.487078    3837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2
	I0311 13:50:49.487083    3837 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:50:49.487110    3837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:64:04:e5:ea:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2
	I0311 13:50:49.489228    3837 main.go:141] libmachine: STDOUT: 
	I0311 13:50:49.489250    3837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:50:49.489273    3837 client.go:171] duration metric: took 212.918583ms to LocalClient.Create
	I0311 13:50:51.273387    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0311 13:50:51.322904    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 13:50:51.360585    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 13:50:51.380379    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0311 13:50:51.382769    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0311 13:50:51.383194    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0311 13:50:51.394232    3837 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 13:50:51.394325    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 13:50:51.490003    3837 start.go:128] duration metric: took 2.2406595s to createHost
	I0311 13:50:51.490043    3837 start.go:83] releasing machines lock for "test-preload-516000", held for 2.2407855s
	W0311 13:50:51.490095    3837 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:51.500686    3837 out.go:177] * Deleting "test-preload-516000" in qemu2 ...
	I0311 13:50:51.508882    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0311 13:50:51.508923    3837 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.259876041s
	I0311 13:50:51.508961    3837 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0311 13:50:51.524366    3837 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:51.524400    3837 start.go:728] Will try again in 5 seconds ...
	W0311 13:50:52.249750    3837 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 13:50:52.249866    3837 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 13:50:52.382980    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0311 13:50:52.383045    3837 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.134296292s
	I0311 13:50:52.383072    3837 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0311 13:50:52.712425    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0311 13:50:52.712498    3837 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.463775958s
	I0311 13:50:52.712535    3837 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0311 13:50:54.200127    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0311 13:50:54.200177    3837 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.951751833s
	I0311 13:50:54.200204    3837 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0311 13:50:54.794852    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0311 13:50:54.794931    3837 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.546343375s
	I0311 13:50:54.794959    3837 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0311 13:50:54.904281    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0311 13:50:54.904339    3837 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.655889416s
	I0311 13:50:54.904374    3837 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0311 13:50:55.873648    3837 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0311 13:50:55.873694    3837 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.625333291s
	I0311 13:50:55.873727    3837 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0311 13:50:56.524493    3837 start.go:360] acquireMachinesLock for test-preload-516000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:50:56.524928    3837 start.go:364] duration metric: took 367.166µs to acquireMachinesLock for "test-preload-516000"
	I0311 13:50:56.525062    3837 start.go:93] Provisioning new machine with config: &{Name:test-preload-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:50:56.525297    3837 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:50:56.533903    3837 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 13:50:56.582087    3837 start.go:159] libmachine.API.Create for "test-preload-516000" (driver="qemu2")
	I0311 13:50:56.582128    3837 client.go:168] LocalClient.Create starting
	I0311 13:50:56.582279    3837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:50:56.582344    3837 main.go:141] libmachine: Decoding PEM data...
	I0311 13:50:56.582363    3837 main.go:141] libmachine: Parsing certificate...
	I0311 13:50:56.582424    3837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:50:56.582487    3837 main.go:141] libmachine: Decoding PEM data...
	I0311 13:50:56.582523    3837 main.go:141] libmachine: Parsing certificate...
	I0311 13:50:56.583052    3837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:50:56.729866    3837 main.go:141] libmachine: Creating SSH key...
	I0311 13:50:56.783439    3837 main.go:141] libmachine: Creating Disk image...
	I0311 13:50:56.783445    3837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:50:56.783597    3837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2
	I0311 13:50:56.795934    3837 main.go:141] libmachine: STDOUT: 
	I0311 13:50:56.795952    3837 main.go:141] libmachine: STDERR: 
	I0311 13:50:56.796010    3837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2 +20000M
	I0311 13:50:56.806852    3837 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:50:56.806889    3837 main.go:141] libmachine: STDERR: 
	I0311 13:50:56.806901    3837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2
	I0311 13:50:56.806904    3837 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:50:56.806945    3837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:f1:30:d0:ca:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/test-preload-516000/disk.qcow2
	I0311 13:50:56.808815    3837 main.go:141] libmachine: STDOUT: 
	I0311 13:50:56.808839    3837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:50:56.808855    3837 client.go:171] duration metric: took 226.719292ms to LocalClient.Create
	I0311 13:50:58.809111    3837 start.go:128] duration metric: took 2.283859417s to createHost
	I0311 13:50:58.809151    3837 start.go:83] releasing machines lock for "test-preload-516000", held for 2.284274417s
	W0311 13:50:58.809489    3837 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:50:58.825112    3837 out.go:177] 
	W0311 13:50:58.828259    3837 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:50:58.828295    3837 out.go:239] * 
	* 
	W0311 13:50:58.831034    3837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:50:58.840073    3837 out.go:177] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-516000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-11 13:50:58.857306 -0700 PDT m=+2482.178715376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-516000 -n test-preload-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-516000 -n test-preload-516000: exit status 7 (67.126333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-516000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-516000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-516000
--- FAIL: TestPreload (9.89s)
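
Note: because the test passes --preload=false, minikube skips the preloaded tarball and caches the v1.24.4 images individually; the cache.go lines above show seven of the eight image tars reporting "succeeded" before the test exits, so only the VM start (the socket_vmnet connection) failed. A quick check that the cache was populated, using the path logged above:

	ls /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/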
TestScheduledStopUnix (9.98s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-487000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-487000 --memory=2048 --driver=qemu2 : exit status 80 (9.802534542s)
-- stdout --
	* [scheduled-stop-487000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-487000" primary control-plane node in "scheduled-stop-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-487000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-487000" primary control-plane node in "scheduled-stop-487000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-487000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-487000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-11 13:51:08.827461 -0700 PDT m=+2492.149191293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-487000 -n scheduled-stop-487000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-487000 -n scheduled-stop-487000: exit status 7 (69.689959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-487000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-487000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-487000
--- FAIL: TestScheduledStopUnix (9.98s)
TestSkaffold (16.57s)
=== RUN   TestSkaffold
E0311 13:51:14.140844    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe288038658 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-287000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-287000 --memory=2600 --driver=qemu2 : exit status 80 (9.953880416s)
-- stdout --
	* [skaffold-287000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-287000" primary control-plane node in "skaffold-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-287000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-287000" primary control-plane node in "skaffold-287000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-287000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-287000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-11 13:51:25.403836 -0700 PDT m=+2508.726098918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-287000 -n skaffold-287000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-287000 -n skaffold-287000: exit status 7 (64.18175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-287000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-287000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-287000
--- FAIL: TestSkaffold (16.57s)
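
The profile dumps later in this report show SocketVMnetClientPath: and SocketVMnetPath: both empty, so the driver is resolving socket_vmnet locations itself. When reproducing locally, pinning both paths rules out path-discovery problems; the flag names below mirror the SocketVMnet fields in the config dump, and the file locations are typical defaults for a manual socket_vmnet install, not values recorded in this log:

	out/minikube-darwin-arm64 start -p skaffold-287000 --memory=2600 --driver=qemu2 \
	  --network socket_vmnet \
	  --socket-vmnet-path=/var/run/socket_vmnet \
	  --socket-vmnet-client-path=/opt/socket_vmnet/bin/socket_vmnet_client

If an explicit path still yields "Connection refused", the daemon itself is down rather than mislocated.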

TestRunningBinaryUpgrade (669.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1951919121 start -p running-upgrade-168000 --memory=2200 --vm-driver=qemu2 
E0311 13:52:32.085540    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1951919121 start -p running-upgrade-168000 --memory=2200 --vm-driver=qemu2 : (1m34.04914625s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-168000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0311 13:54:29.008296    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:56:14.134217    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:59:17.194972    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:59:29.000250    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 14:01:14.123491    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-168000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m58.171112042s)

-- stdout --
	* [running-upgrade-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-168000" primary control-plane node in "running-upgrade-168000" cluster
	* Updating the running qemu2 "running-upgrade-168000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0311 13:53:24.376615    4162 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:53:24.376731    4162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:53:24.376734    4162 out.go:304] Setting ErrFile to fd 2...
	I0311 13:53:24.376736    4162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:53:24.376851    4162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:53:24.377922    4162 out.go:298] Setting JSON to false
	I0311 13:53:24.394632    4162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3175,"bootTime":1710187229,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:53:24.394732    4162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:53:24.399119    4162 out.go:177] * [running-upgrade-168000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:53:24.406054    4162 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:53:24.410063    4162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:53:24.406158    4162 notify.go:220] Checking for updates...
	I0311 13:53:24.416027    4162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:53:24.419063    4162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:53:24.420284    4162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:53:24.423053    4162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:53:24.426261    4162 config.go:182] Loaded profile config "running-upgrade-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 13:53:24.429051    4162 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 13:53:24.432042    4162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:53:24.436090    4162 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:53:24.443036    4162 start.go:297] selected driver: qemu2
	I0311 13:53:24.443040    4162 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 13:53:24.443085    4162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:53:24.445170    4162 cni.go:84] Creating CNI manager for ""
	I0311 13:53:24.445189    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:53:24.445214    4162 start.go:340] cluster config:
	{Name:running-upgrade-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 13:53:24.445259    4162 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:53:24.452986    4162 out.go:177] * Starting "running-upgrade-168000" primary control-plane node in "running-upgrade-168000" cluster
	I0311 13:53:24.457038    4162 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 13:53:24.457050    4162 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0311 13:53:24.457055    4162 cache.go:56] Caching tarball of preloaded images
	I0311 13:53:24.457097    4162 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:53:24.457102    4162 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0311 13:53:24.457146    4162 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/config.json ...
	I0311 13:53:24.457468    4162 start.go:360] acquireMachinesLock for running-upgrade-168000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:53:36.088072    4162 start.go:364] duration metric: took 11.630969875s to acquireMachinesLock for "running-upgrade-168000"
	I0311 13:53:36.088112    4162 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:53:36.088118    4162 fix.go:54] fixHost starting: 
	I0311 13:53:36.088909    4162 fix.go:112] recreateIfNeeded on running-upgrade-168000: state=Running err=<nil>
	W0311 13:53:36.088920    4162 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:53:36.091921    4162 out.go:177] * Updating the running qemu2 "running-upgrade-168000" VM ...
	I0311 13:53:36.099846    4162 machine.go:94] provisionDockerMachine start ...
	I0311 13:53:36.099903    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.100049    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.100054    4162 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:53:36.154632    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-168000
	
	I0311 13:53:36.154649    4162 buildroot.go:166] provisioning hostname "running-upgrade-168000"
	I0311 13:53:36.154708    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.154809    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.154815    4162 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-168000 && echo "running-upgrade-168000" | sudo tee /etc/hostname
	I0311 13:53:36.216747    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-168000
	
	I0311 13:53:36.216808    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.216920    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.216929    4162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-168000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-168000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-168000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 13:53:36.270277    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:53:36.270288    4162 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18358-1220/.minikube CaCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18358-1220/.minikube}
	I0311 13:53:36.270294    4162 buildroot.go:174] setting up certificates
	I0311 13:53:36.270304    4162 provision.go:84] configureAuth start
	I0311 13:53:36.270309    4162 provision.go:143] copyHostCerts
	I0311 13:53:36.270360    4162 exec_runner.go:144] found /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem, removing ...
	I0311 13:53:36.270366    4162 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem
	I0311 13:53:36.270473    4162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem (1123 bytes)
	I0311 13:53:36.270637    4162 exec_runner.go:144] found /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem, removing ...
	I0311 13:53:36.270640    4162 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem
	I0311 13:53:36.270676    4162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem (1675 bytes)
	I0311 13:53:36.270767    4162 exec_runner.go:144] found /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem, removing ...
	I0311 13:53:36.270770    4162 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem
	I0311 13:53:36.270805    4162 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem (1082 bytes)
	I0311 13:53:36.270889    4162 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-168000 san=[127.0.0.1 localhost minikube running-upgrade-168000]
	I0311 13:53:36.555451    4162 provision.go:177] copyRemoteCerts
	I0311 13:53:36.555486    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:53:36.555494    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50306 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa Username:docker}
	I0311 13:53:36.585673    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 13:53:36.592432    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 13:53:36.599479    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 13:53:36.606859    4162 provision.go:87] duration metric: took 336.553208ms to configureAuth
	I0311 13:53:36.606873    4162 buildroot.go:189] setting minikube options for container-runtime
	I0311 13:53:36.606987    4162 config.go:182] Loaded profile config "running-upgrade-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 13:53:36.607034    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.607128    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.607133    4162 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 13:53:36.663102    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 13:53:36.663113    4162 buildroot.go:70] root file system type: tmpfs
	I0311 13:53:36.663165    4162 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 13:53:36.663213    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.663321    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.663359    4162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 13:53:36.721685    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 13:53:36.721741    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.721865    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.721875    4162 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 13:53:36.779705    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:53:36.779720    4162 machine.go:97] duration metric: took 679.887541ms to provisionDockerMachine
	I0311 13:53:36.779726    4162 start.go:293] postStartSetup for "running-upgrade-168000" (driver="qemu2")
	I0311 13:53:36.779732    4162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:53:36.779783    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:53:36.779792    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50306 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa Username:docker}
	I0311 13:53:36.811374    4162 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:53:36.812733    4162 info.go:137] Remote host: Buildroot 2021.02.12
	I0311 13:53:36.812743    4162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18358-1220/.minikube/addons for local assets ...
	I0311 13:53:36.812806    4162 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18358-1220/.minikube/files for local assets ...
	I0311 13:53:36.812884    4162 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem -> 16522.pem in /etc/ssl/certs
	I0311 13:53:36.812979    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 13:53:36.815695    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem --> /etc/ssl/certs/16522.pem (1708 bytes)
	I0311 13:53:36.822838    4162 start.go:296] duration metric: took 43.108333ms for postStartSetup
	I0311 13:53:36.822856    4162 fix.go:56] duration metric: took 734.763125ms for fixHost
	I0311 13:53:36.822899    4162 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.822999    4162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a49a90] 0x104a4c2f0 <nil>  [] 0s} localhost 50306 <nil> <nil>}
	I0311 13:53:36.823004    4162 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 13:53:36.880763    4162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710190416.948955791
	
	I0311 13:53:36.880774    4162 fix.go:216] guest clock: 1710190416.948955791
	I0311 13:53:36.880783    4162 fix.go:229] Guest: 2024-03-11 13:53:36.948955791 -0700 PDT Remote: 2024-03-11 13:53:36.822858 -0700 PDT m=+12.469668709 (delta=126.097791ms)
	I0311 13:53:36.880794    4162 fix.go:200] guest clock delta is within tolerance: 126.097791ms
	I0311 13:53:36.880797    4162 start.go:83] releasing machines lock for "running-upgrade-168000", held for 792.742084ms
	I0311 13:53:36.880861    4162 ssh_runner.go:195] Run: cat /version.json
	I0311 13:53:36.880869    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50306 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa Username:docker}
	I0311 13:53:36.880879    4162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:53:36.880900    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50306 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa Username:docker}
	W0311 13:53:36.881554    4162 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50306: connect: connection refused
	I0311 13:53:36.881575    4162 retry.go:31] will retry after 318.841632ms: dial tcp [::1]:50306: connect: connection refused
	W0311 13:53:36.910062    4162 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0311 13:53:36.910123    4162 ssh_runner.go:195] Run: systemctl --version
	I0311 13:53:36.911923    4162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 13:53:36.913746    4162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 13:53:36.913775    4162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0311 13:53:36.916636    4162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0311 13:53:36.921175    4162 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0311 13:53:36.921188    4162 start.go:494] detecting cgroup driver to use...
	I0311 13:53:36.921268    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:53:36.926585    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0311 13:53:36.929748    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 13:53:36.932488    4162 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 13:53:36.932511    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 13:53:36.935746    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:53:36.938950    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 13:53:36.942007    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:53:36.944726    4162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:53:36.947923    4162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 13:53:36.951159    4162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:53:36.953911    4162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:53:36.956434    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:37.062609    4162 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 13:53:37.073695    4162 start.go:494] detecting cgroup driver to use...
	I0311 13:53:37.073763    4162 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 13:53:37.080725    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:53:37.085734    4162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 13:53:37.092736    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:53:37.097509    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 13:53:37.101793    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:53:37.107315    4162 ssh_runner.go:195] Run: which cri-dockerd
	I0311 13:53:37.109003    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 13:53:37.111719    4162 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 13:53:37.116839    4162 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 13:53:37.224404    4162 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 13:53:37.333199    4162 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 13:53:37.333254    4162 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0311 13:53:37.338988    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:37.440951    4162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 13:53:54.013792    4162 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.573335541s)
	I0311 13:53:54.013871    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 13:53:54.019143    4162 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 13:53:54.026835    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 13:53:54.033089    4162 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 13:53:54.124590    4162 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 13:53:54.214937    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:54.298627    4162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 13:53:54.304833    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 13:53:54.309612    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:54.403116    4162 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 13:53:54.444412    4162 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 13:53:54.444492    4162 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 13:53:54.446657    4162 start.go:562] Will wait 60s for crictl version
	I0311 13:53:54.446708    4162 ssh_runner.go:195] Run: which crictl
	I0311 13:53:54.448406    4162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:53:54.460889    4162 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0311 13:53:54.460957    4162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 13:53:54.477621    4162 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 13:53:54.497190    4162 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0311 13:53:54.497251    4162 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0311 13:53:54.498667    4162 kubeadm.go:877] updating cluster {Name:running-upgrade-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0311 13:53:54.498709    4162 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 13:53:54.498749    4162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 13:53:54.509641    4162 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 13:53:54.509649    4162 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 13:53:54.509690    4162 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 13:53:54.512538    4162 ssh_runner.go:195] Run: which lz4
	I0311 13:53:54.513797    4162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 13:53:54.514987    4162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 13:53:54.514999    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0311 13:53:55.291595    4162 docker.go:649] duration metric: took 777.849208ms to copy over tarball
	I0311 13:53:55.291653    4162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 13:53:56.498995    4162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.207367083s)
	I0311 13:53:56.499012    4162 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0311 13:53:56.515394    4162 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 13:53:56.518386    4162 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0311 13:53:56.523562    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:56.605461    4162 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 13:53:56.839341    4162 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 13:53:56.855462    4162 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 13:53:56.855471    4162 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 13:53:56.855476    4162 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 13:53:56.864479    4162 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:56.864546    4162 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:56.864667    4162 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:56.864694    4162 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:56.864987    4162 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:56.864996    4162 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:56.865165    4162 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 13:53:56.865280    4162 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:56.874999    4162 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:56.875137    4162 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:56.875145    4162 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:56.875211    4162 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 13:53:56.875399    4162 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:56.875863    4162 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:56.875629    4162 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:56.875673    4162 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	W0311 13:53:58.786022    4162 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 13:53:58.786238    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:58.801026    4162 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0311 13:53:58.801052    4162 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:58.801113    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:58.813645    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 13:53:58.813760    4162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0311 13:53:58.815556    4162 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0311 13:53:58.815573    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0311 13:53:58.850346    4162 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0311 13:53:58.850360    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0311 13:53:58.870249    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0311 13:53:58.891287    4162 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0311 13:53:58.891323    4162 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0311 13:53:58.891343    4162 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0311 13:53:58.891396    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0311 13:53:58.901683    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 13:53:58.901788    4162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0311 13:53:58.903223    4162 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0311 13:53:58.903234    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0311 13:53:58.910385    4162 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0311 13:53:58.910400    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0311 13:53:58.912273    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:58.938701    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:58.941645    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:58.942890    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:58.948937    4162 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0311 13:53:58.948976    4162 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0311 13:53:58.948993    4162 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:58.949044    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:58.952568    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:58.953292    4162 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0311 13:53:58.953310    4162 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:58.953339    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:58.959408    4162 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0311 13:53:58.959429    4162 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:58.959479    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:58.971590    4162 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0311 13:53:58.971611    4162 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:58.971667    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:58.976163    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 13:53:58.981535    4162 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0311 13:53:58.981553    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0311 13:53:58.981554    4162 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:58.981616    4162 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:58.989466    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0311 13:53:58.997153    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0311 13:53:58.997183    4162 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0311 13:53:59.612525    4162 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 13:53:59.612642    4162 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:59.642246    4162 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0311 13:53:59.642272    4162 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:59.642336    4162 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:59.698799    4162 cache_images.go:92] duration metric: took 2.843404458s to LoadCachedImages
	W0311 13:53:59.698841    4162 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	I0311 13:53:59.698846    4162 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0311 13:53:59.698895    4162 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-168000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
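The unit above is rendered from the node's Kubernetes version, hostname, and IP before being pushed to the guest. A minimal sketch of that templating step, using illustrative names (NodeConfig, unitTmpl) rather than minikube's actual API:

package main

import (
	"os"
	"text/template"
)

// NodeConfig carries the values interpolated into the drop-in; the type
// name is illustrative, not minikube's actual API.
type NodeConfig struct {
	Version, Name, IP string
}

const unitTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	// Values taken from the log lines above.
	_ = t.Execute(os.Stdout, NodeConfig{Version: "v1.24.1", Name: "running-upgrade-168000", IP: "10.0.2.15"})
}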
	I0311 13:53:59.698964    4162 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 13:53:59.816611    4162 cni.go:84] Creating CNI manager for ""
	I0311 13:53:59.816626    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:53:59.816630    4162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 13:53:59.816638    4162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-168000 NodeName:running-upgrade-168000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 13:53:59.816709    4162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-168000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
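The rendered config is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by "---" separators. A stdlib-only sketch of splitting such a multi-document stream before parsing, with an abbreviated stand-in config for brevity:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the four rendered documents above.
	const config = "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	for i, doc := range strings.Split(config, "---\n") {
		// Each document carries its own apiVersion/kind header.
		fmt.Printf("document %d:\n%s\n", i, strings.TrimSpace(doc))
	}
}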
	
	I0311 13:53:59.816760    4162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0311 13:53:59.820551    4162 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:53:59.820605    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 13:53:59.824422    4162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0311 13:53:59.832751    4162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:53:59.845561    4162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
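"scp memory" in the lines above means the file content is rendered in memory and pushed straight to the target path, rather than copied from an existing local file. A simplified local sketch of that idea (the helper name copyFromMemory is illustrative, and the real transfer goes over SSH):

package main

import (
	"fmt"
	"os"
)

// copyFromMemory mirrors the log's phrasing: it takes in-memory content
// and a destination path, writes the file, and reports the byte count.
func copyFromMemory(content []byte, dst string) error {
	if err := os.WriteFile(dst, content, 0644); err != nil {
		return err
	}
	fmt.Printf("scp memory --> %s (%d bytes)\n", dst, len(content))
	return nil
}

func main() {
	unit := []byte("[Unit]\nWants=docker.socket\n")
	if err := copyFromMemory(unit, "/tmp/kubelet.service"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}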
	I0311 13:53:59.873413    4162 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0311 13:53:59.875050    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:54:00.009270    4162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:54:00.017017    4162 certs.go:68] Setting up /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000 for IP: 10.0.2.15
	I0311 13:54:00.017030    4162 certs.go:194] generating shared ca certs ...
	I0311 13:54:00.017038    4162 certs.go:226] acquiring lock for ca certs: {Name:mkd7f96dc3b50acb1e4b9ffed31996dfe6eec0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:54:00.017205    4162 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key
	I0311 13:54:00.017263    4162 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key
	I0311 13:54:00.017270    4162 certs.go:256] generating profile certs ...
	I0311 13:54:00.017343    4162 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/client.key
	I0311 13:54:00.017358    4162 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.key.331e748a
	I0311 13:54:00.017368    4162 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.crt.331e748a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0311 13:54:00.116258    4162 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.crt.331e748a ...
	I0311 13:54:00.116272    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.crt.331e748a: {Name:mk37ced38061b3155cad8b79c0eef23f2754e3e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:54:00.116569    4162 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.key.331e748a ...
	I0311 13:54:00.116574    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.key.331e748a: {Name:mk5aaed6cda12163af79cabe710d32930eef0e06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:54:00.116698    4162 certs.go:381] copying /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.crt.331e748a -> /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.crt
	I0311 13:54:00.116823    4162 certs.go:385] copying /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.key.331e748a -> /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.key
	I0311 13:54:00.116974    4162 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/proxy-client.key
	I0311 13:54:00.117098    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/1652.pem (1338 bytes)
	W0311 13:54:00.117128    4162 certs.go:480] ignoring /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/1652_empty.pem, impossibly tiny 0 bytes
	I0311 13:54:00.117134    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:54:00.117158    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem (1082 bytes)
	I0311 13:54:00.117180    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:54:00.117205    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem (1675 bytes)
	I0311 13:54:00.117253    4162 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem (1708 bytes)
	I0311 13:54:00.117610    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:54:00.135059    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 13:54:00.165814    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:54:00.204453    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0311 13:54:00.217079    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 13:54:00.234208    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 13:54:00.253495    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 13:54:00.280401    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 13:54:00.293217    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:54:00.303430    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/1652.pem --> /usr/share/ca-certificates/1652.pem (1338 bytes)
	I0311 13:54:00.316957    4162 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem --> /usr/share/ca-certificates/16522.pem (1708 bytes)
	I0311 13:54:00.336394    4162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 13:54:00.346476    4162 ssh_runner.go:195] Run: openssl version
	I0311 13:54:00.348634    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:54:00.354164    4162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:54:00.355700    4162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:54:00.355734    4162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:54:00.364735    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0311 13:54:00.370292    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1652.pem && ln -fs /usr/share/ca-certificates/1652.pem /etc/ssl/certs/1652.pem"
	I0311 13:54:00.377218    4162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1652.pem
	I0311 13:54:00.382214    4162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:18 /usr/share/ca-certificates/1652.pem
	I0311 13:54:00.382261    4162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1652.pem
	I0311 13:54:00.397724    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1652.pem /etc/ssl/certs/51391683.0"
	I0311 13:54:00.402937    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16522.pem && ln -fs /usr/share/ca-certificates/16522.pem /etc/ssl/certs/16522.pem"
	I0311 13:54:00.414292    4162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16522.pem
	I0311 13:54:00.417111    4162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:18 /usr/share/ca-certificates/16522.pem
	I0311 13:54:00.417143    4162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16522.pem
	I0311 13:54:00.422177    4162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16522.pem /etc/ssl/certs/3ec20f2e.0"
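The hash-then-symlink pairs above follow OpenSSL's lookup convention: trust anchors in /etc/ssl/certs are resolved by subject-hash file names of the form "<hash>.0", so each installed PEM gets a link named after the output of `openssl x509 -hash`. A sketch of that pattern (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the "<hash>.0" symlink that OpenSSL expects,
// equivalent in spirit to the log's `ln -fs` invocations.
func linkBySubjectHash(pem, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}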
	I0311 13:54:00.430931    4162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:54:00.446359    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 13:54:00.452863    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 13:54:00.455239    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 13:54:00.458112    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 13:54:00.468030    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 13:54:00.474933    4162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
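Each `openssl x509 -checkend 86400` run asks whether the certificate remains valid for at least the next 86400 seconds (24 hours); exit status 0 means it does. The same check expressed with Go's crypto/x509, against an assumed certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// i.e. the condition under which `-checkend` would exit non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}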
	I0311 13:54:00.480563    4162 kubeadm.go:391] StartCluster: {Name:running-upgrade-168000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50368 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-168000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 13:54:00.480674    4162 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 13:54:00.515671    4162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 13:54:00.534864    4162 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 13:54:00.534875    4162 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 13:54:00.534878    4162 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 13:54:00.534933    4162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 13:54:00.542815    4162 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:54:00.543134    4162 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-168000" does not appear in /Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:54:00.543234    4162 kubeconfig.go:62] /Users/jenkins/minikube-integration/18358-1220/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-168000" cluster setting kubeconfig missing "running-upgrade-168000" context setting]
	I0311 13:54:00.543418    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/kubeconfig: {Name:mkd61d3fa94ba0392c00bb2cce43bcec89e45a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:54:00.543864    4162 kapi.go:59] client config for running-upgrade-168000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/client.key", CAFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105d37fd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 13:54:00.544179    4162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 13:54:00.551017    4162 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-168000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
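The drift check relies on `diff -u`'s exit status: 0 means the files match, 1 means they differ (triggering the reconfigure path above), and 2 means diff itself failed. A sketch of reading those codes (paths taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps its exit status onto
// (drifted, unified diff, error).
func configDrifted(oldPath, newPath string) (bool, string, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	if err == nil {
		return false, "", nil // exit 0: identical files
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, out.String(), nil // exit 1: files differ, drift detected
	}
	return false, "", err // exit 2: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if drifted {
		fmt.Println("kubeadm config drift:\n" + diff)
	}
}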
	I0311 13:54:00.551027    4162 kubeadm.go:1153] stopping kube-system containers ...
	I0311 13:54:00.551090    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 13:54:00.588538    4162 docker.go:483] Stopping containers: [f6b733455fba 349023bbfc53 a86df61f6344 09bd8b44c2fe a420e877d727 01bc52299f67 3d1e6e29e227 e0557adcc012 e1e73f352dca e26f5a5073c5 596824c05237 a762e2759285 7d9686e6741b 3fefa81fe585 157cbdaae87c ffce710cd787 cf1bb3d2f610 727dfaa36be2 70d63e22f7c1 244719e096ed 61a7e06809da 51cbacae3c00 fc72f5571a18 caf168dad16c 382fc925268c 43bac95b5b3d e16d4264f4bc 221bf7f37fb8 960d0facd9a2 76b4de848c64 5364f6fbeff9 144c0aaa76d1 d31a11d50ab6 40469759e598 216bea9c9a75 85ab83950643]
	I0311 13:54:00.588616    4162 ssh_runner.go:195] Run: docker stop f6b733455fba 349023bbfc53 a86df61f6344 09bd8b44c2fe a420e877d727 01bc52299f67 3d1e6e29e227 e0557adcc012 e1e73f352dca e26f5a5073c5 596824c05237 a762e2759285 7d9686e6741b 3fefa81fe585 157cbdaae87c ffce710cd787 cf1bb3d2f610 727dfaa36be2 70d63e22f7c1 244719e096ed 61a7e06809da 51cbacae3c00 fc72f5571a18 caf168dad16c 382fc925268c 43bac95b5b3d e16d4264f4bc 221bf7f37fb8 960d0facd9a2 76b4de848c64 5364f6fbeff9 144c0aaa76d1 d31a11d50ab6 40469759e598 216bea9c9a75 85ab83950643
	I0311 13:54:10.724110    4162 ssh_runner.go:235] Completed: docker stop f6b733455fba 349023bbfc53 a86df61f6344 09bd8b44c2fe a420e877d727 01bc52299f67 3d1e6e29e227 e0557adcc012 e1e73f352dca e26f5a5073c5 596824c05237 a762e2759285 7d9686e6741b 3fefa81fe585 157cbdaae87c ffce710cd787 cf1bb3d2f610 727dfaa36be2 70d63e22f7c1 244719e096ed 61a7e06809da 51cbacae3c00 fc72f5571a18 caf168dad16c 382fc925268c 43bac95b5b3d e16d4264f4bc 221bf7f37fb8 960d0facd9a2 76b4de848c64 5364f6fbeff9 144c0aaa76d1 d31a11d50ab6 40469759e598 216bea9c9a75 85ab83950643: (10.135797667s)
	I0311 13:54:10.724214    4162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 13:54:10.826744    4162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 13:54:10.830482    4162 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 11 20:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 11 20:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 11 20:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 11 20:53 /etc/kubernetes/scheduler.conf
	
	I0311 13:54:10.830519    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/admin.conf
	I0311 13:54:10.833591    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:54:10.833628    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 13:54:10.836461    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/kubelet.conf
	I0311 13:54:10.839000    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:54:10.839022    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 13:54:10.842251    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/controller-manager.conf
	I0311 13:54:10.845453    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:54:10.845477    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 13:54:10.848814    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/scheduler.conf
	I0311 13:54:10.851242    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:54:10.851267    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 13:54:10.854126    4162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 13:54:10.856872    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:54:10.877474    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:54:11.160842    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:54:11.404984    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:54:11.430608    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
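Rather than a full `kubeadm init`, the restart replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed config, so the existing cluster is rebuilt in place. A sketch of that sequencing under the same PATH prefix as the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.24.1:$PATH\" kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
			return
		}
	}
}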
	I0311 13:54:11.454027    4162 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:54:11.454103    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:54:11.460698    4162 api_server.go:72] duration metric: took 6.671417ms to wait for apiserver process to appear ...
	I0311 13:54:11.460710    4162 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:54:11.460718    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:16.462667    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:16.462703    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:21.462786    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:21.462828    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:26.463015    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:26.463043    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:31.463450    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:31.463535    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:36.464462    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:36.464504    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:41.465387    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:41.465481    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:46.467016    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:46.467041    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:51.468529    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:51.468612    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:56.470756    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:56.470785    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:01.473061    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:01.473138    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:06.473855    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:06.473961    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:11.474301    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
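The probes above are spaced by a roughly five-second client timeout and repeat until an overall deadline, after which the test falls back to gathering component logs as seen next. A sketch of such a polling loop (the deadline value is illustrative, and TLS verification is skipped only to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz probes url until it returns 200 or the deadline passes.
func waitForHealthz(url string, deadline time.Duration) bool {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps between probes in the log
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(2 * time.Second) // back off briefly before the next probe
	}
	return false
}

func main() {
	ok := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute)
	fmt.Println("apiserver healthy:", ok)
}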
	I0311 13:55:11.474496    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:11.494164    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:11.494283    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:11.509260    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:11.509334    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:11.521871    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:11.521958    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:11.538615    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:11.538689    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:11.549620    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:11.549693    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:11.561156    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:11.561224    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:11.571293    4162 logs.go:276] 0 containers: []
	W0311 13:55:11.571305    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:11.571360    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:11.582589    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:11.582610    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:11.582616    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:11.627811    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:11.627833    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:11.730074    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:11.730087    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:11.743650    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:11.743662    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:11.761709    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:11.761724    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:55:11.772739    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:11.772750    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:11.786316    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:11.786328    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:11.813396    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:11.813410    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:11.832861    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:11.832874    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:11.849783    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:11.849793    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:11.863966    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:11.863977    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:11.875343    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:11.875353    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:11.887073    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:11.887086    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:11.899149    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:11.899158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:11.911279    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:11.911289    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:55:11.922610    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:11.922622    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:11.927170    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:11.927178    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:11.941351    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:11.941361    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:11.955888    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:11.955897    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:14.471658    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:19.473721    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:19.474072    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:19.511784    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:19.511922    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:19.529934    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:19.530046    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:19.543637    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:19.543710    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:19.556295    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:19.556378    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:19.567331    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:19.567407    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:19.577454    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:19.577517    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:19.588268    4162 logs.go:276] 0 containers: []
	W0311 13:55:19.588284    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:19.588350    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:19.598816    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:19.598830    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:19.598835    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:19.610047    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:19.610056    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:19.651199    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:19.651212    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:19.669953    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:19.669964    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:19.682929    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:19.682938    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:19.694613    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:19.694625    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:19.706343    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:19.706354    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:19.718046    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:19.718056    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:19.729120    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:19.729138    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:19.766818    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:19.766836    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:19.781980    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:19.781993    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:19.800690    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:19.800701    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:55:19.812629    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:19.812643    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:55:19.825482    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:19.825496    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:19.838444    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:19.838458    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:19.843803    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:19.843812    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:19.857573    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:19.857584    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:19.871131    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:19.871140    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:19.883474    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:19.883484    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:22.412557    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:27.414794    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:27.415038    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:27.441800    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:27.441923    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:27.458466    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:27.458548    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:27.471773    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:27.471854    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:27.487537    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:27.487609    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:27.497860    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:27.497932    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:27.510108    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:27.510179    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:27.520364    4162 logs.go:276] 0 containers: []
	W0311 13:55:27.520374    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:27.520429    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:27.530969    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:27.530983    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:27.530990    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:27.542644    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:27.542655    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:55:27.553775    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:27.553786    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:27.594132    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:27.594141    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:27.607363    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:27.607373    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:27.624695    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:27.624708    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:27.636655    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:27.636668    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:55:27.648088    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:27.648105    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:27.659295    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:27.659306    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:27.684447    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:27.684458    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:27.697225    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:27.697236    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:27.717018    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:27.717030    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:27.728633    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:27.728652    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:27.739920    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:27.739933    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:27.753793    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:27.753804    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:27.792562    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:27.792572    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:27.804654    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:27.804665    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:27.822105    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:27.822117    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:27.826394    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:27.826399    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:30.366439    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:35.368693    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:35.368824    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:35.381556    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:35.381633    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:35.392103    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:35.392178    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:35.403066    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:35.403146    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:35.413595    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:35.413689    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:35.424103    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:35.424183    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:35.434950    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:35.435033    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:35.445131    4162 logs.go:276] 0 containers: []
	W0311 13:55:35.445142    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:35.445203    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:35.455578    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:35.455596    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:35.455602    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:35.467245    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:35.467254    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:35.478717    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:35.478728    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:35.490472    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:35.490486    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:35.501925    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:35.501936    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:35.518965    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:35.518979    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:55:35.529589    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:35.529603    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:55:35.541665    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:35.541677    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:35.553699    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:35.553711    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:35.590954    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:35.590964    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:35.604775    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:35.604787    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:35.632095    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:35.632107    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:35.643518    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:35.643531    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:35.656093    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:35.656105    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:35.680788    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:35.680796    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:35.719907    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:35.719920    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:35.724584    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:35.724591    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:35.738034    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:35.738045    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:35.750906    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:35.750915    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:38.270074    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:43.272334    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:43.272604    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:43.297373    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:43.297499    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:43.313460    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:43.313545    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:43.325749    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:43.325819    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:43.337128    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:43.337201    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:43.347762    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:43.347836    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:43.358540    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:43.358613    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:43.368227    4162 logs.go:276] 0 containers: []
	W0311 13:55:43.368242    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:43.368292    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:43.378855    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:43.378870    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:43.378876    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:43.390253    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:43.390265    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:43.401571    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:43.401584    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:43.413352    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:43.413362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:43.424748    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:43.424758    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:43.435648    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:43.435658    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:43.446868    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:43.446878    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:43.471709    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:43.471716    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:43.483817    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:43.483829    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:55:43.495049    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:43.495065    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:43.535379    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:43.535393    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:43.554510    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:43.554520    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:43.568145    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:43.568155    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:43.581530    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:43.581540    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:43.592993    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:43.593004    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:43.631605    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:43.631613    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:43.635736    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:43.635744    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:43.650132    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:43.650144    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:43.672922    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:43.672933    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
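Each gathering round begins by discovering container IDs per control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; two IDs show up per component here because the restart left both an exited and a newer instance behind, and the kindnet query correctly returns none on this non-CNI profile. A hedged sketch of that discovery step, shelling out the same way the ssh_runner lines show (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers, running or exited,
// whose name matches the kubeadm naming convention k8s_<component>,
// mirroring the "docker ps -a --filter=name=... --format={{.ID}}" calls above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```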
	I0311 13:55:46.186202    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:51.188359    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:51.188480    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:51.208402    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:51.208481    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:51.219916    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:51.219992    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:51.230533    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:51.230612    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:51.240883    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:51.240951    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:51.251238    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:51.251310    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:51.262369    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:51.262443    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:51.272347    4162 logs.go:276] 0 containers: []
	W0311 13:55:51.272358    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:51.272418    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:51.282998    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:51.283011    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:51.283018    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:51.303197    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:51.303207    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:55:51.314303    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:51.314314    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:51.339332    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:51.339344    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:51.346052    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:51.346065    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:51.357380    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:51.357392    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:55:51.368304    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:51.368315    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:51.382069    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:51.382082    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:51.395982    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:51.395991    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:51.407120    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:51.407131    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:51.423837    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:51.423849    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:51.435603    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:51.435617    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:51.453909    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:51.453919    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:51.489816    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:51.489829    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:51.503528    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:51.503541    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:51.515376    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:51.515388    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:51.527796    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:51.527806    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:51.538789    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:51.538800    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:51.550091    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:51.550102    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
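With the IDs in hand, each component's output is captured via `docker logs --tail 400 <id>`, so only the most recent 400 lines travel back over SSH per container. A minimal sketch of that step (an assumed helper, not the ssh_runner API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs fetches the last n lines of a container's output, the
// same "docker logs --tail 400 <id>" shape seen in the log. docker relays
// the container's stderr stream on stderr, so CombinedOutput captures both.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID copied from the log above; in practice it would come
	// from the discovery step rather than being hard-coded.
	logs, err := tailContainerLogs("c646ff80a5b9", 400)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Print(logs)
}
```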
	I0311 13:55:54.093327    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:59.095671    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:59.095833    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:59.111769    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:55:59.111846    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:59.123090    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:55:59.123166    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:59.133605    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:55:59.133690    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:59.143883    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:55:59.143956    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:59.154023    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:55:59.154093    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:59.164339    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:55:59.164399    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:59.174538    4162 logs.go:276] 0 containers: []
	W0311 13:55:59.174551    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:59.174607    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:59.185079    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:55:59.185095    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:59.185100    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:59.189500    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:59.189511    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:59.224656    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:55:59.224668    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:55:59.236854    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:55:59.236867    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:55:59.248060    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:55:59.248071    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:55:59.267687    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:55:59.267699    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:55:59.286259    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:55:59.286270    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:55:59.298462    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:55:59.298473    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:55:59.309995    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:55:59.310011    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:55:59.321156    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:59.321168    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:59.361260    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:55:59.361268    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:55:59.372229    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:55:59.375961    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:55:59.387240    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:55:59.387251    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:55:59.406747    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:59.406759    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:59.434035    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:55:59.434053    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:59.446507    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:55:59.446519    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:55:59.460648    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:55:59.460658    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:55:59.473877    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:55:59.473888    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:55:59.485942    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:55:59.485955    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
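Alongside the per-container logs, each round also collects host-level evidence: kubelet and docker/cri-docker unit logs from journald, plus kernel ring-buffer messages filtered to warning severity and above. A sketch of the same collection, running each command through bash as ssh_runner does so the pipe in the dmesg command stays intact (the map and its keys are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// hostLogCommands mirrors the host-side commands visible in the log.
var hostLogCommands = map[string]string{
	"kubelet": "sudo journalctl -u kubelet -n 400",
	"Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
	"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
}

func main() {
	for name, cmd := range hostLogCommands {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s", name, out)
	}
}
```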
	I0311 13:56:02.003029    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:07.005177    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:07.005310    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:07.019095    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:07.019179    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:07.036757    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:07.036823    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:07.047032    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:07.047105    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:07.057440    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:07.057515    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:07.067777    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:07.067839    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:07.078456    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:07.078522    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:07.088734    4162 logs.go:276] 0 containers: []
	W0311 13:56:07.088752    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:07.088828    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:07.099251    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:07.099266    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:07.099271    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:56:07.118390    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:07.118400    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:07.131915    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:07.131926    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:07.145311    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:07.145325    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:07.157162    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:07.157172    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:07.173737    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:07.173747    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:07.214596    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:07.214607    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:07.219812    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:07.219819    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:07.233789    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:07.233802    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:07.245251    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:07.245263    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:07.269223    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:07.269230    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:07.304110    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:07.304120    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:07.317760    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:07.317771    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:07.328800    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:07.328818    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:07.340489    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:07.340502    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:07.351680    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:07.351693    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:07.364091    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:07.364103    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:07.375576    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:07.375586    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:07.387578    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:07.387593    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:09.901540    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:14.903589    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:14.903702    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:14.915010    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:14.915098    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:14.925677    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:14.925744    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:14.936427    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:14.936503    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:14.948219    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:14.948294    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:14.958951    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:14.959020    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:14.969769    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:14.969841    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:14.984659    4162 logs.go:276] 0 containers: []
	W0311 13:56:14.984672    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:14.984725    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:14.995182    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:14.995200    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:14.995205    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:15.007758    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:15.007771    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:15.012219    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:15.012225    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:15.026437    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:15.026446    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:15.038792    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:15.038802    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:15.054654    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:15.054665    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:15.065640    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:15.065656    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:15.091161    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:15.091168    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:15.104849    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:15.104860    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:15.116606    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:15.116617    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:15.158012    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:15.158021    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:15.179520    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:15.179530    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:15.192748    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:15.192758    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:15.203960    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:15.203976    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:15.215519    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:15.215530    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:15.226847    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:15.226859    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:15.238188    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:15.238200    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:15.250554    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:15.250566    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:15.289259    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:15.289269    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
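The "container status" step uses a shell fallback: the backtick substitution in `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers crictl when it is on the PATH and degrades to plain `docker ps -a` otherwise. A hedged Go sketch of the same prefer-with-fallback pattern:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback from the log: try crictl first,
// and if it is missing or errors, fall back to docker, just as the
// "|| sudo docker ps -a" branch does in the original command.
func containerStatus() (string, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
```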
	I0311 13:56:17.809566    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:22.811850    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:22.812056    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:22.836702    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:22.836804    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:22.853831    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:22.853910    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:22.866904    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:22.866984    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:22.878717    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:22.878779    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:22.888998    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:22.889068    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:22.899492    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:22.899568    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:22.909920    4162 logs.go:276] 0 containers: []
	W0311 13:56:22.909930    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:22.909983    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:22.921139    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:22.921155    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:22.921160    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:22.926116    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:22.926123    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:22.940346    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:22.940358    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:22.951760    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:22.951772    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:22.963778    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:22.963790    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:22.976292    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:22.976306    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:22.990107    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:22.990117    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:23.005391    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:23.005402    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:23.029239    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:23.029250    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:23.046271    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:23.046283    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:23.057681    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:23.057691    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:23.068769    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:23.068780    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:23.103226    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:23.103236    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:23.117077    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:23.117088    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:23.128875    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:23.128885    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:23.141439    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:23.141450    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:23.153312    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:23.153323    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:23.194076    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:23.194084    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:56:23.212708    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:23.212718    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:25.724416    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:30.726501    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:30.726660    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:30.742396    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:30.742467    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:30.754618    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:30.754682    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:30.765705    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:30.765783    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:30.776352    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:30.776427    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:30.787072    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:30.787139    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:30.798110    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:30.798180    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:30.809135    4162 logs.go:276] 0 containers: []
	W0311 13:56:30.809149    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:30.809208    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:30.820497    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:30.820513    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:30.820518    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:56:30.839464    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:30.839474    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:30.854176    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:30.854187    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:30.865080    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:30.865091    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:30.878828    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:30.878839    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:30.883490    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:30.883497    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:30.899548    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:30.899560    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:30.917048    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:30.917058    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:30.956289    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:30.956297    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:30.972528    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:30.972540    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:30.984354    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:30.984369    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:31.009695    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:31.009706    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:31.023267    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:31.023278    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:31.034837    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:31.034848    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:31.046506    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:31.046518    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:31.058387    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:31.058398    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:31.069360    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:31.069371    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:31.080925    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:31.080935    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:31.092050    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:31.092061    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
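The "describe nodes" step deliberately runs the version-pinned kubectl that minikube installs inside the guest (`/var/lib/minikube/binaries/v1.24.1/kubectl`) against the VM-local kubeconfig, rather than any kubectl on the host. A sketch of that invocation, with both paths copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The kubectl binary matching the cluster's Kubernetes version, and the
	// kubeconfig inside the guest VM, both as they appear in the log.
	const kubectl = "/var/lib/minikube/binaries/v1.24.1/kubectl"
	const kubeconfig = "/var/lib/minikube/kubeconfig"

	out, err := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig="+kubeconfig).CombinedOutput()
	if err != nil {
		// With the apiserver unhealthy, as in this run, a failure here is expected.
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}
```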
	I0311 13:56:33.631580    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:38.634025    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:38.634137    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:38.645796    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:38.645861    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:38.655904    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:38.655978    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:38.665869    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:38.665934    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:38.677827    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:38.677903    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:38.688620    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:38.688696    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:38.701691    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:38.701764    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:38.712021    4162 logs.go:276] 0 containers: []
	W0311 13:56:38.712036    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:38.712091    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:38.722771    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:38.722788    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:38.722793    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:38.734552    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:38.734565    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:38.748991    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:38.749001    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:56:38.767065    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:38.767076    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:38.784247    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:38.784258    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:38.796442    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:38.796454    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:38.808207    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:38.808219    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:38.820897    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:38.820909    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:38.860222    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:38.860233    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:38.873989    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:38.874000    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:38.885862    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:38.885874    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:38.897684    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:38.897699    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:38.912639    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:38.912649    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:38.917409    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:38.917444    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:38.936522    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:38.936535    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:38.949864    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:38.949876    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:38.991952    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:38.991965    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:39.005811    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:39.005828    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:39.023060    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:39.023072    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:41.549652    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:46.551787    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:46.551908    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:46.564280    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:46.564354    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:46.576402    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:46.576481    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:46.591220    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:46.591297    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:46.601750    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:46.601822    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:46.616516    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:46.616580    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:46.627672    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:46.627745    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:46.638029    4162 logs.go:276] 0 containers: []
	W0311 13:56:46.638042    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:46.638100    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:46.648968    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:46.648983    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:46.648991    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:46.660382    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:46.660394    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:46.674572    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:46.674583    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:46.686064    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:46.686073    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:46.702860    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:46.702871    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:46.715622    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:46.715637    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:56:46.735016    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:46.735026    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:46.755981    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:46.755996    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:46.782353    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:46.782364    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:46.803743    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:46.803759    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:46.816113    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:46.816127    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:46.839258    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:46.839267    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:46.880538    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:46.880548    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:46.884989    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:46.884994    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:46.898285    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:46.898295    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:46.910355    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:46.910368    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:46.924871    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:46.924881    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:46.942493    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:46.942504    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:46.981152    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:46.981163    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:49.496720    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:54.498923    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:54.499087    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:54.519922    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:56:54.520007    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:54.531252    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:56:54.531349    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:54.542017    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:56:54.542089    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:54.552978    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:56:54.553049    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:54.563325    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:56:54.563389    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:54.573674    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:56:54.573748    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:54.583973    4162 logs.go:276] 0 containers: []
	W0311 13:56:54.583984    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:54.584044    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:54.594332    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:56:54.594347    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:54.594353    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:54.598788    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:54.598795    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:54.633750    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:56:54.633763    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:56:54.645389    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:54.645402    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:54.668468    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:56:54.668476    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:54.680580    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:56:54.680591    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:56:54.691802    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:56:54.691813    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:56:54.704765    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:56:54.704777    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:56:54.717126    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:56:54.717137    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:56:54.728350    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:56:54.728364    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:56:54.742628    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:56:54.742641    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:56:54.766991    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:56:54.767004    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:56:54.787199    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:56:54.787212    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:56:54.801042    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:56:54.801052    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:56:54.812780    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:56:54.812792    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:56:54.824832    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:56:54.824842    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:56:54.837374    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:54.837386    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:54.879698    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:56:54.879711    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:56:54.896895    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:56:54.896906    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:56:57.415368    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:02.417850    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:02.418091    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:02.438086    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:02.438186    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:02.452980    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:02.453061    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:02.465332    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:02.465403    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:02.476928    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:02.476994    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:02.487953    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:02.488017    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:02.503212    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:02.503280    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:02.513240    4162 logs.go:276] 0 containers: []
	W0311 13:57:02.513250    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:02.513304    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:02.527886    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:02.527905    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:02.527910    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:02.532515    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:02.532522    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:02.546479    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:02.546494    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:02.558164    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:02.558176    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:02.569479    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:02.569491    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:02.589456    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:02.589466    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:02.603357    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:02.603367    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:02.614361    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:02.614372    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:02.626254    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:02.626264    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:02.637732    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:02.637742    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:02.655345    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:02.655359    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:02.691740    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:02.691752    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:02.705692    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:02.705705    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:02.717080    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:02.717092    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:02.733672    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:02.733683    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:02.745843    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:02.745852    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:02.769068    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:02.769075    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:02.810347    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:02.810363    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:02.823563    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:02.823576    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:05.337693    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:10.340151    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:10.340395    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:10.364930    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:10.365067    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:10.387278    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:10.387356    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:10.399254    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:10.399331    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:10.414061    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:10.414131    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:10.424499    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:10.424567    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:10.435252    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:10.435353    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:10.445520    4162 logs.go:276] 0 containers: []
	W0311 13:57:10.445531    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:10.445591    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:10.455953    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:10.455967    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:10.455973    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:10.460794    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:10.460802    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:10.478449    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:10.478460    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:10.493256    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:10.493266    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:10.505556    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:10.505569    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:10.520956    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:10.520967    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:10.532197    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:10.532209    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:10.573144    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:10.573154    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:10.586374    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:10.586384    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:10.597797    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:10.597811    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:10.615389    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:10.615402    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:10.628534    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:10.628544    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:10.640395    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:10.640404    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:10.651799    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:10.651811    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:10.674016    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:10.674023    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:10.709330    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:10.709342    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:10.728914    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:10.728927    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:10.739992    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:10.740003    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:10.750896    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:10.750906    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:13.264216    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:18.266458    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:18.266680    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:18.286255    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:18.286356    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:18.304536    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:18.304609    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:18.315702    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:18.315765    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:18.326029    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:18.326090    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:18.336945    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:18.337017    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:18.348163    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:18.348227    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:18.358477    4162 logs.go:276] 0 containers: []
	W0311 13:57:18.358489    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:18.358551    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:18.369207    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:18.369224    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:18.369230    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:18.406838    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:18.406848    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:18.424604    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:18.424617    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:18.447971    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:18.447986    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:18.460524    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:18.460538    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:18.472507    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:18.472517    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:18.484472    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:18.484485    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:18.495787    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:18.495798    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:18.508258    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:18.508268    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:18.527810    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:18.527818    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:18.542722    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:18.542734    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:18.581055    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:18.581068    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:18.592724    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:18.592732    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:18.604258    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:18.604273    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:18.615941    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:18.615951    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:18.656776    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:18.656784    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:18.661278    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:18.661286    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:18.675399    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:18.675412    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:18.695051    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:18.695062    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:21.208116    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:26.210068    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:26.210183    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:26.221842    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:26.221910    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:26.233231    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:26.233296    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:26.247745    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:26.247828    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:26.260627    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:26.260695    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:26.272086    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:26.272142    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:26.283002    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:26.283072    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:26.294859    4162 logs.go:276] 0 containers: []
	W0311 13:57:26.294873    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:26.294930    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:26.305333    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:26.305349    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:26.305355    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:26.346633    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:26.346646    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:26.350977    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:26.350985    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:26.386445    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:26.386459    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:26.398625    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:26.398636    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:26.414457    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:26.414470    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:26.436061    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:26.436070    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:26.449039    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:26.449049    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:26.460408    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:26.460419    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:26.471880    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:26.471893    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:26.483791    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:26.483801    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:26.495148    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:26.495158    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:26.506369    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:26.506380    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:26.526248    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:26.526258    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:26.544172    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:26.544182    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:26.555933    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:26.555946    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:26.569727    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:26.569737    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:26.583748    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:26.583758    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:26.594725    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:26.594735    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:29.108962    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:34.111109    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:34.111237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:34.122284    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:34.122357    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:34.133077    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:34.133140    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:34.142981    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:34.143055    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:34.153807    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:34.153870    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:34.164741    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:34.164807    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:34.179260    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:34.179335    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:34.189170    4162 logs.go:276] 0 containers: []
	W0311 13:57:34.189188    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:34.189248    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:34.199889    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:34.199908    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:34.199913    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:34.213805    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:34.213820    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:34.227362    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:34.227372    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:34.240408    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:34.240418    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:34.255557    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:34.255572    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:34.266925    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:34.266938    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:34.283990    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:34.283999    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:34.296094    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:34.296103    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:34.330829    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:34.330838    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:34.351484    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:34.351495    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:34.362549    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:34.362561    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:34.367126    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:34.367136    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:34.378631    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:34.378643    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:34.390082    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:34.390100    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:34.411875    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:34.411884    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:34.424328    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:34.424342    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:34.463347    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:34.463359    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:34.476880    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:34.476891    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:34.488844    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:34.488854    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:37.002149    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:42.004284    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:42.004378    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:42.016892    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:42.016968    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:42.027500    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:42.027573    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:42.038531    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:42.038606    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:42.050199    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:42.050269    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:42.061001    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:42.061065    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:42.071682    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:42.071752    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:42.082011    4162 logs.go:276] 0 containers: []
	W0311 13:57:42.082024    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:42.082083    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:42.092514    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:42.092529    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:42.092534    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:42.103564    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:42.103574    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:42.144748    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:42.144761    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:42.185587    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:42.185600    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:42.197814    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:42.197829    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:42.209582    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:42.209593    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:42.221382    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:42.221394    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:42.233079    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:42.233094    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:42.244437    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:42.244448    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:42.257398    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:42.257410    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:42.269895    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:42.269906    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:42.274516    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:42.274527    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:42.294597    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:42.294607    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:42.308132    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:42.308141    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:42.325729    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:42.325739    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:42.337722    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:42.337733    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:42.360414    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:42.360424    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:42.374054    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:42.374063    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:42.386411    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:42.386422    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:44.899709    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:49.902168    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:49.902326    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:49.918670    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:49.918748    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:49.930673    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:49.930743    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:49.942291    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:49.942365    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:49.954702    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:49.954779    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:49.967055    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:49.967131    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:49.979237    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:49.979319    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:49.990477    4162 logs.go:276] 0 containers: []
	W0311 13:57:49.990490    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:49.990553    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:50.001383    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:50.001398    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:50.001403    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:50.017131    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:50.017144    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:50.033584    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:50.033599    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:50.047055    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:50.047067    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:50.061937    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:50.061954    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:50.079723    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:50.079736    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:50.092757    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:50.092769    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:50.115742    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:50.115754    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:50.126807    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:50.126821    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:50.144120    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:50.144134    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:50.155872    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:50.155884    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:50.171536    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:50.171550    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:50.182660    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:50.182671    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:50.206355    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:50.206376    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:50.212255    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:50.212269    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:50.257471    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:50.257484    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:50.271583    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:50.271596    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:50.284703    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:50.284715    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:50.327677    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:50.327690    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:52.850406    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:57.853323    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:57.853446    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:57.865996    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:57:57.866071    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:57.877139    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:57:57.877206    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:57.888163    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:57:57.888225    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:57.899303    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:57:57.899374    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:57.909848    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:57:57.909907    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:57.925423    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:57:57.925501    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:57.936204    4162 logs.go:276] 0 containers: []
	W0311 13:57:57.936215    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:57.936275    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:57.946693    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:57:57.946711    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:57:57.946717    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:57:57.957673    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:57:57.957685    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:57:57.971295    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:57:57.971306    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:57:57.985336    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:57:57.985347    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:57:57.996365    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:57:57.996376    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:57:58.008177    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:57:58.008187    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:57:58.024795    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:57:58.024807    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:57:58.036785    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:57:58.036799    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:57:58.054765    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:57:58.054777    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:57:58.073431    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:57:58.073442    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:57:58.086038    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:57:58.086049    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:57:58.097459    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:57:58.097470    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:57:58.108837    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:57:58.108848    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:58.121230    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:58.121241    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:58.166486    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:58.166501    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:58.201440    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:57:58.201453    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:57:58.215473    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:57:58.215488    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:57:58.226654    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:58.226666    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:58.247815    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:58.247825    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:58:00.754454    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:05.756967    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:05.757125    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:58:05.773264    4162 logs.go:276] 2 containers: [c646ff80a5b9 818456a37448]
	I0311 13:58:05.773362    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:58:05.785316    4162 logs.go:276] 2 containers: [c8a6f9e10281 a420e877d727]
	I0311 13:58:05.785392    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:58:05.795799    4162 logs.go:276] 2 containers: [02997ede6ee9 f6b733455fba]
	I0311 13:58:05.795864    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:58:05.806287    4162 logs.go:276] 2 containers: [b86cf6a9c6e7 3ed19e217722]
	I0311 13:58:05.806358    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:58:05.817115    4162 logs.go:276] 2 containers: [d9edcf2bd818 3d1e6e29e227]
	I0311 13:58:05.817190    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:58:05.828056    4162 logs.go:276] 2 containers: [81e691359feb a86df61f6344]
	I0311 13:58:05.828128    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:58:05.839704    4162 logs.go:276] 0 containers: []
	W0311 13:58:05.839715    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:58:05.839772    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:58:05.850422    4162 logs.go:276] 2 containers: [20d335c01a3b 349023bbfc53]
	I0311 13:58:05.850438    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:58:05.850444    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:58:05.855060    4162 logs.go:123] Gathering logs for kube-apiserver [c646ff80a5b9] ...
	I0311 13:58:05.855068    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c646ff80a5b9"
	I0311 13:58:05.868723    4162 logs.go:123] Gathering logs for coredns [f6b733455fba] ...
	I0311 13:58:05.868735    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6b733455fba"
	I0311 13:58:05.879929    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:58:05.879939    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:58:05.893124    4162 logs.go:123] Gathering logs for kube-controller-manager [81e691359feb] ...
	I0311 13:58:05.893134    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81e691359feb"
	I0311 13:58:05.911262    4162 logs.go:123] Gathering logs for storage-provisioner [349023bbfc53] ...
	I0311 13:58:05.911273    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349023bbfc53"
	I0311 13:58:05.923004    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:58:05.923015    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:58:05.959519    4162 logs.go:123] Gathering logs for kube-apiserver [818456a37448] ...
	I0311 13:58:05.959532    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 818456a37448"
	I0311 13:58:05.989288    4162 logs.go:123] Gathering logs for etcd [c8a6f9e10281] ...
	I0311 13:58:05.989298    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8a6f9e10281"
	I0311 13:58:06.003563    4162 logs.go:123] Gathering logs for kube-proxy [d9edcf2bd818] ...
	I0311 13:58:06.003572    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9edcf2bd818"
	I0311 13:58:06.015464    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:58:06.015475    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:58:06.055477    4162 logs.go:123] Gathering logs for etcd [a420e877d727] ...
	I0311 13:58:06.055492    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a420e877d727"
	I0311 13:58:06.069323    4162 logs.go:123] Gathering logs for kube-proxy [3d1e6e29e227] ...
	I0311 13:58:06.069336    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d1e6e29e227"
	I0311 13:58:06.080196    4162 logs.go:123] Gathering logs for storage-provisioner [20d335c01a3b] ...
	I0311 13:58:06.080207    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20d335c01a3b"
	I0311 13:58:06.091329    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:58:06.091340    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:58:06.112019    4162 logs.go:123] Gathering logs for coredns [02997ede6ee9] ...
	I0311 13:58:06.112025    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02997ede6ee9"
	I0311 13:58:06.123626    4162 logs.go:123] Gathering logs for kube-scheduler [b86cf6a9c6e7] ...
	I0311 13:58:06.123637    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b86cf6a9c6e7"
	I0311 13:58:06.135329    4162 logs.go:123] Gathering logs for kube-scheduler [3ed19e217722] ...
	I0311 13:58:06.135342    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed19e217722"
	I0311 13:58:06.147400    4162 logs.go:123] Gathering logs for kube-controller-manager [a86df61f6344] ...
	I0311 13:58:06.147412    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a86df61f6344"
	I0311 13:58:08.660927    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:13.663067    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:13.663127    4162 kubeadm.go:591] duration metric: took 4m13.136380542s to restartPrimaryControlPlane
	W0311 13:58:13.663176    4162 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 13:58:13.663194    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0311 13:58:14.715816    4162 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.05264525s)
	I0311 13:58:14.715884    4162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:58:14.720896    4162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 13:58:14.723990    4162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 13:58:14.726524    4162 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 13:58:14.726531    4162 kubeadm.go:156] found existing configuration files:
	
	I0311 13:58:14.726556    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/admin.conf
	I0311 13:58:14.728996    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 13:58:14.729021    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 13:58:14.731961    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/kubelet.conf
	I0311 13:58:14.734667    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 13:58:14.734695    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 13:58:14.737264    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/controller-manager.conf
	I0311 13:58:14.740163    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 13:58:14.740185    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 13:58:14.742729    4162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/scheduler.conf
	I0311 13:58:14.745187    4162 kubeadm.go:162] "https://control-plane.minikube.internal:50368" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50368 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 13:58:14.745207    4162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
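The grep-and-remove sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the check fails (here every file is already missing, so each grep exits 2 and the rm is a no-op). A minimal Go sketch of the pattern, with runCmd as a hypothetical stand-in for the ssh_runner calls in the log:

package main

import "fmt"

// cleanupStaleConfigs mirrors kubeadm.go:162 above: if a kubeconfig does not
// reference the expected endpoint (or does not exist), remove it so kubeadm
// can regenerate it. runCmd is a hypothetical stand-in for ssh_runner.Run.
func cleanupStaleConfigs(runCmd func(string) error, endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := runCmd(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			_ = runCmd(fmt.Sprintf("sudo rm -f %s", f)) // best effort, as in the log
		}
	}
}

func main() {
	echo := func(cmd string) error { fmt.Println("would run:", cmd); return fmt.Errorf("not found") }
	cleanupStaleConfigs(echo, "https://control-plane.minikube.internal:50368")
}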
	I0311 13:58:14.748131    4162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 13:58:14.767382    4162 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0311 13:58:14.767418    4162 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 13:58:14.832253    4162 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 13:58:14.832313    4162 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 13:58:14.832380    4162 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0311 13:58:14.881766    4162 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 13:58:14.886008    4162 out.go:204]   - Generating certificates and keys ...
	I0311 13:58:14.886049    4162 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 13:58:14.886085    4162 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 13:58:14.886118    4162 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 13:58:14.886154    4162 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 13:58:14.886189    4162 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 13:58:14.886223    4162 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 13:58:14.886254    4162 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 13:58:14.886282    4162 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 13:58:14.886317    4162 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 13:58:14.886349    4162 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 13:58:14.886371    4162 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 13:58:14.886407    4162 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 13:58:15.250138    4162 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 13:58:15.424421    4162 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 13:58:15.470522    4162 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 13:58:15.585137    4162 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 13:58:15.615213    4162 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 13:58:15.615531    4162 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 13:58:15.615554    4162 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 13:58:15.703744    4162 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 13:58:15.707997    4162 out.go:204]   - Booting up control plane ...
	I0311 13:58:15.708047    4162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 13:58:15.708091    4162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 13:58:15.708129    4162 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 13:58:15.708173    4162 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 13:58:15.708279    4162 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 13:58:20.214709    4162 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.507804 seconds
	I0311 13:58:20.214787    4162 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 13:58:20.218937    4162 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 13:58:20.727797    4162 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 13:58:20.727912    4162 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-168000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 13:58:21.233059    4162 kubeadm.go:309] [bootstrap-token] Using token: kjz1nx.sgu2ev9wpkj3yzry
	I0311 13:58:21.235678    4162 out.go:204]   - Configuring RBAC rules ...
	I0311 13:58:21.235747    4162 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 13:58:21.235878    4162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 13:58:21.242368    4162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 13:58:21.243334    4162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 13:58:21.244246    4162 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 13:58:21.245358    4162 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 13:58:21.248766    4162 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 13:58:21.434412    4162 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 13:58:21.637560    4162 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 13:58:21.638151    4162 kubeadm.go:309] 
	I0311 13:58:21.638184    4162 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 13:58:21.638187    4162 kubeadm.go:309] 
	I0311 13:58:21.638231    4162 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 13:58:21.638235    4162 kubeadm.go:309] 
	I0311 13:58:21.638249    4162 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 13:58:21.638280    4162 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 13:58:21.638308    4162 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 13:58:21.638314    4162 kubeadm.go:309] 
	I0311 13:58:21.638350    4162 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 13:58:21.638355    4162 kubeadm.go:309] 
	I0311 13:58:21.638387    4162 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 13:58:21.638392    4162 kubeadm.go:309] 
	I0311 13:58:21.638420    4162 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 13:58:21.638460    4162 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 13:58:21.638502    4162 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 13:58:21.638508    4162 kubeadm.go:309] 
	I0311 13:58:21.638549    4162 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 13:58:21.638592    4162 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 13:58:21.638596    4162 kubeadm.go:309] 
	I0311 13:58:21.638644    4162 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token kjz1nx.sgu2ev9wpkj3yzry \
	I0311 13:58:21.638707    4162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b0f804fee3777fe090204338c70c85586d2b95499c0fea24e08ef3935500f54b \
	I0311 13:58:21.638720    4162 kubeadm.go:309] 	--control-plane 
	I0311 13:58:21.638724    4162 kubeadm.go:309] 
	I0311 13:58:21.638772    4162 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 13:58:21.638775    4162 kubeadm.go:309] 
	I0311 13:58:21.638819    4162 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token kjz1nx.sgu2ev9wpkj3yzry \
	I0311 13:58:21.638875    4162 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b0f804fee3777fe090204338c70c85586d2b95499c0fea24e08ef3935500f54b 
	I0311 13:58:21.638934    4162 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
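The --discovery-token-ca-cert-hash printed above pins the cluster CA: it is a SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the CA certificate. A small sketch of recomputing it; the ca.crt path is an assumption based on this cluster's certificateDir:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the CA
	// certificate's DER-encoded SubjectPublicKeyInfo.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}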
	I0311 13:58:21.638940    4162 cni.go:84] Creating CNI manager for ""
	I0311 13:58:21.638947    4162 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:58:21.642852    4162 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 13:58:21.649900    4162 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 13:58:21.652937    4162 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
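The 457-byte conflist copied above configures the bridge CNI that minikube recommends for the qemu2 driver with the Docker runtime. The log does not show the payload, so the sketch below writes a representative bridge conflist; every field value is an illustrative assumption, not the actual bytes that were copied:

package main

import "os"

// Illustrative bridge CNI config of the kind written above. Plugin names and
// the pod CIDR are assumptions for the sketch; writing to /etc/cni/net.d
// requires root on the node.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// minikube copies this from memory over SSH; a local write is the same idea.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}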
	I0311 13:58:21.658677    4162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 13:58:21.658730    4162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:58:21.658730    4162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-168000 minikube.k8s.io/updated_at=2024_03_11T13_58_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=running-upgrade-168000 minikube.k8s.io/primary=true
	I0311 13:58:21.696891    4162 ops.go:34] apiserver oom_adj: -16
	I0311 13:58:21.696914    4162 kubeadm.go:1106] duration metric: took 38.233458ms to wait for elevateKubeSystemPrivileges
	W0311 13:58:21.696930    4162 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 13:58:21.696933    4162 kubeadm.go:393] duration metric: took 4m21.224773625s to StartCluster
	I0311 13:58:21.696942    4162 settings.go:142] acquiring lock: {Name:mkde8963c2fec7d8df74a4e81a4ba3233d320136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:58:21.697007    4162 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:58:21.697394    4162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/kubeconfig: {Name:mkd61d3fa94ba0392c00bb2cce43bcec89e45a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:58:21.698104    4162 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:58:21.701856    4162 out.go:177] * Verifying Kubernetes components...
	I0311 13:58:21.698180    4162 config.go:182] Loaded profile config "running-upgrade-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 13:58:21.698170    4162 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 13:58:21.710873    4162 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-168000"
	I0311 13:58:21.710888    4162 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-168000"
	W0311 13:58:21.710891    4162 addons.go:243] addon storage-provisioner should already be in state true
	I0311 13:58:21.710906    4162 host.go:66] Checking if "running-upgrade-168000" exists ...
	I0311 13:58:21.710963    4162 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-168000"
	I0311 13:58:21.710974    4162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-168000"
	I0311 13:58:21.710976    4162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:58:21.711177    4162 retry.go:31] will retry after 1.119821432s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/monitor: connect: connection refused
	I0311 13:58:21.712305    4162 kapi.go:59] client config for running-upgrade-168000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/running-upgrade-168000/client.key", CAFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105d37fd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 13:58:21.712436    4162 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-168000"
	W0311 13:58:21.712441    4162 addons.go:243] addon default-storageclass should already be in state true
	I0311 13:58:21.712449    4162 host.go:66] Checking if "running-upgrade-168000" exists ...
	I0311 13:58:21.713190    4162 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 13:58:21.713195    4162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 13:58:21.713200    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50306 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa Username:docker}
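The sshutil line above opens the tunnel used by all of these ssh_runner and scp calls: an SSH client against the VM's forwarded port, authenticated with the machine's private key. A minimal sketch using golang.org/x/crypto/ssh, with the port, user, and key path taken from the log:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Dial the VM's forwarded SSH port with the machine key, as sshutil.go:53 does.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "localhost:50306", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	log.Println("connected")
}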
	I0311 13:58:21.804022    4162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:58:21.809150    4162 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:58:21.809194    4162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:58:21.813493    4162 api_server.go:72] duration metric: took 115.380167ms to wait for apiserver process to appear ...
	I0311 13:58:21.813501    4162 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:58:21.813507    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:21.841016    4162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 13:58:22.837614    4162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:58:22.841381    4162 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:58:22.841392    4162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 13:58:22.841406    4162 sshutil.go:53] new ssh client: &{IP:localhost Port:50306 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/running-upgrade-168000/id_rsa Username:docker}
	I0311 13:58:22.877910    4162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:58:26.815502    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:26.815541    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:31.817364    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:31.817434    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:36.818048    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:36.818081    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:41.819115    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:41.819156    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:46.820512    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:46.820548    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:51.821819    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:51.821857    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0311 13:58:52.149305    4162 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0311 13:58:52.153630    4162 out.go:177] * Enabled addons: storage-provisioner
	I0311 13:58:52.161572    4162 addons.go:505] duration metric: took 30.464431s for enable addons: enabled=[storage-provisioner]
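Addon enablement above reduces to two steps per manifest: copy the YAML from memory to /etc/kubernetes/addons/ on the node, then apply it with the node-local kubectl and kubeconfig. That is plausibly why storage-provisioner succeeds while default-storageclass fails: the apply runs inside the VM, where 10.0.2.15:8443 is reachable, whereas listing StorageClasses is an API round-trip from the host, which times out. A sketch with run and scp as hypothetical stand-ins for minikube's ssh_runner helpers:

package main

import "fmt"

// enableAddon mirrors the flow above: scp the manifest from memory to the
// node, then apply it with the node-local kubectl against the node kubeconfig.
func enableAddon(run func(string) error, scp func(data []byte, dst string) error, manifest []byte, dst string) error {
	if err := scp(manifest, dst); err != nil { // "scp memory --> /etc/kubernetes/addons/..."
		return err
	}
	return run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.24.1/kubectl apply -f " + dst)
}

func main() {
	run := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
	scp := func(data []byte, dst string) error { fmt.Printf("would copy %d bytes to %s\n", len(data), dst); return nil }
	_ = enableAddon(run, scp, []byte("apiVersion: v1\n# ..."), "/etc/kubernetes/addons/storage-provisioner.yaml")
}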
	I0311 13:58:56.823875    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:56.823943    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:01.826276    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:01.826311    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:06.828425    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:06.828462    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:11.830531    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:11.830556    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:16.832014    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:16.832073    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:21.834205    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
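Every probe above takes the same shape: an HTTPS GET of /healthz bounded by a roughly five-second client timeout (which is exactly the spacing of the "stopped" lines), retried until an overall deadline. A minimal sketch of such a poller, assuming a caller-supplied tls.Config for the cluster certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver the way the log shows: each GET is
// bounded by a short client timeout, and the loop retries until a deadline.
func waitForHealthz(url string, tlsCfg *tls.Config, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s spacing of the failures above
		Transport: &http.Transport{TLSClientConfig: tlsCfg},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause in case the dial fails fast
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// InsecureSkipVerify is for illustration only; minikube verifies against the cluster CA.
	err := waitForHealthz("https://10.0.2.15:8443/healthz",
		&tls.Config{InsecureSkipVerify: true}, time.Minute)
	fmt.Println(err)
}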
	I0311 13:59:21.834347    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:21.844712    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 13:59:21.844788    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:21.855632    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 13:59:21.855706    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:21.865957    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 13:59:21.866024    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:21.876920    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 13:59:21.876983    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:21.887796    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 13:59:21.887870    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:21.898641    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 13:59:21.898717    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:21.908909    4162 logs.go:276] 0 containers: []
	W0311 13:59:21.908925    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:21.908985    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:21.919556    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 13:59:21.919570    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 13:59:21.919575    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 13:59:21.931638    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:21.931648    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:21.954638    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:21.954647    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:21.988896    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:21.988907    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:21.993542    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 13:59:21.993554    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 13:59:22.008984    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 13:59:22.008995    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 13:59:22.023391    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 13:59:22.023401    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 13:59:22.034854    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 13:59:22.034863    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 13:59:22.051081    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:59:22.051090    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:22.063007    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:22.063018    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:22.100252    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 13:59:22.100263    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 13:59:22.115960    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 13:59:22.115970    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 13:59:22.128029    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 13:59:22.128040    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
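Each diagnostic round above (and the near-identical rounds that follow) is the same recipe: resolve each control-plane component's container ID with a docker ps name filter, then tail the last 400 lines of its logs, alongside journalctl for kubelet and Docker and a dmesg pass. A sketch of the container-log part, with run as a hypothetical stand-in for ssh_runner that returns the command's output:

package main

import (
	"fmt"
	"strings"
)

// gatherLogs mirrors the rounds above: find each component's container ID,
// then tail the last 400 log lines from it.
func gatherLogs(run func(string) (string, error)) {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		out, err := run(fmt.Sprintf("docker ps -a --filter=name=k8s_%s --format={{.ID}}", c))
		if err != nil || strings.TrimSpace(out) == "" {
			continue // e.g. kindnet above: no container found
		}
		for _, id := range strings.Fields(out) {
			logs, _ := run(fmt.Sprintf("docker logs --tail 400 %s", id))
			fmt.Printf("==> %s [%s]\n%s\n", c, id, logs)
		}
	}
}

func main() {
	run := func(cmd string) (string, error) { fmt.Println("would run:", cmd); return "", nil }
	gatherLogs(run)
}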
	I0311 13:59:24.653612    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:29.655807    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:29.656096    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:29.681043    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 13:59:29.681157    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:29.697258    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 13:59:29.697345    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:29.711492    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 13:59:29.711581    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:29.723908    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 13:59:29.723978    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:29.736536    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 13:59:29.736608    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:29.750168    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 13:59:29.750241    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:29.760788    4162 logs.go:276] 0 containers: []
	W0311 13:59:29.760802    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:29.760866    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:29.771134    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 13:59:29.771150    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 13:59:29.771156    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 13:59:29.785469    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 13:59:29.785480    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 13:59:29.797459    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 13:59:29.797470    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 13:59:29.811440    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 13:59:29.811449    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 13:59:29.823520    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 13:59:29.823531    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 13:59:29.841181    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:59:29.841192    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:29.853080    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:29.853093    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:29.876526    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:29.876537    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:29.912167    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:29.912180    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:29.916919    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:29.916925    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:29.950771    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 13:59:29.950782    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 13:59:29.965304    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 13:59:29.965314    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 13:59:29.976689    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 13:59:29.976700    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 13:59:32.488872    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:37.490929    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:37.491160    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:37.520742    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 13:59:37.520849    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:37.536198    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 13:59:37.536272    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:37.548813    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 13:59:37.548881    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:37.565750    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 13:59:37.565817    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:37.575977    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 13:59:37.576042    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:37.586545    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 13:59:37.586609    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:37.597184    4162 logs.go:276] 0 containers: []
	W0311 13:59:37.597197    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:37.597258    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:37.608081    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 13:59:37.608095    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 13:59:37.608100    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 13:59:37.621980    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 13:59:37.621991    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 13:59:37.633504    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 13:59:37.633517    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 13:59:37.645108    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 13:59:37.645118    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 13:59:37.663619    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 13:59:37.663629    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 13:59:37.674765    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:59:37.674774    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:37.686261    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:37.686271    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:37.722809    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:37.722817    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:37.761218    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 13:59:37.761229    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 13:59:37.776350    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 13:59:37.776361    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 13:59:37.788421    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 13:59:37.788432    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 13:59:37.803119    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:37.803130    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:37.827470    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:37.827477    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:40.334508    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:45.335576    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:45.335815    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:45.350342    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 13:59:45.350430    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:45.361561    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 13:59:45.361627    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:45.373320    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 13:59:45.373391    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:45.383298    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 13:59:45.383363    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:45.393972    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 13:59:45.394040    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:45.404175    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 13:59:45.404237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:45.414818    4162 logs.go:276] 0 containers: []
	W0311 13:59:45.414854    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:45.414919    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:45.430694    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 13:59:45.430711    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:45.430716    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:45.466613    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 13:59:45.466627    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 13:59:45.481086    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 13:59:45.481099    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 13:59:45.495893    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 13:59:45.495904    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 13:59:45.515507    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:45.515518    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:45.539353    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:45.539366    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:45.550104    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:45.550118    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:45.585880    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 13:59:45.585891    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 13:59:45.600167    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 13:59:45.600178    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 13:59:45.612141    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 13:59:45.612152    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 13:59:45.624385    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 13:59:45.624396    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 13:59:45.642077    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 13:59:45.642088    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 13:59:45.654100    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:59:45.654110    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:48.167545    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:53.169724    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:53.170028    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:53.195197    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 13:59:53.195315    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:53.213049    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 13:59:53.213136    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:53.227650    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 13:59:53.227726    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:53.239332    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 13:59:53.239401    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:53.249612    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 13:59:53.249679    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:53.259891    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 13:59:53.259957    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:53.270462    4162 logs.go:276] 0 containers: []
	W0311 13:59:53.270474    4162 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:53.270535    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:53.280691    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 13:59:53.280706    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:53.280712    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:53.316450    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:53.316463    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:53.321216    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 13:59:53.321222    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 13:59:53.335460    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 13:59:53.335471    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 13:59:53.348633    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 13:59:53.348644    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 13:59:53.360232    4162 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:53.360248    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:53.385210    4162 logs.go:123] Gathering logs for container status ...
	I0311 13:59:53.385220    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:53.397150    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:53.397161    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:53.432142    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 13:59:53.432156    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 13:59:53.446852    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 13:59:53.446863    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 13:59:53.460377    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 13:59:53.460388    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 13:59:53.475077    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 13:59:53.475087    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 13:59:53.492693    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 13:59:53.492704    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 13:59:56.009140    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:01.011378    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:01.011652    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:01.037778    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:01.037903    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:01.054425    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:01.054519    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:01.073363    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:01.073429    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:01.084433    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:01.084500    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:01.097154    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:01.097222    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:01.107345    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:01.107412    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:01.117002    4162 logs.go:276] 0 containers: []
	W0311 14:00:01.117014    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:01.117070    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:01.127583    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:01.127599    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:01.127605    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:01.162210    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:01.162220    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:01.181882    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:01.181894    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:01.193351    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:01.193362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:01.205464    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:01.205476    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:01.222847    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:01.222859    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:01.241476    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:01.241491    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:01.277509    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:01.277522    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:01.294866    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:01.294876    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:01.310506    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:01.310520    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:01.324984    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:01.324993    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:01.348113    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:01.348122    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:01.360095    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:01.360108    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:03.866460    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:08.868570    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:08.868707    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:08.885306    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:08.885394    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:08.900617    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:08.900692    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:08.912704    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:08.912776    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:08.923828    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:08.923896    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:08.934217    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:08.934291    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:08.944967    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:08.945036    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:08.955232    4162 logs.go:276] 0 containers: []
	W0311 14:00:08.955245    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:08.955303    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:08.965335    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:08.965350    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:08.965356    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:09.004018    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:09.004028    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:09.019629    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:09.019643    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:09.036188    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:09.036199    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:09.048337    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:09.048348    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:09.072137    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:09.072149    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:09.108619    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:09.108628    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:09.113626    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:09.113635    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:09.128216    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:09.128229    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:09.142280    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:09.142291    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:09.154086    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:09.154096    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:09.171028    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:09.171040    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:09.182539    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:09.182553    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:11.696576    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:16.698662    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:16.698819    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:16.712144    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:16.712225    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:16.722958    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:16.723031    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:16.733399    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:16.733465    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:16.744024    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:16.744100    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:16.755291    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:16.755363    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:16.772267    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:16.772342    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:16.790906    4162 logs.go:276] 0 containers: []
	W0311 14:00:16.790920    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:16.790991    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:16.801293    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:16.801308    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:16.801313    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:16.812672    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:16.812682    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:16.827089    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:16.827098    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:16.845247    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:16.845259    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:16.857203    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:16.857218    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:16.892234    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:16.892244    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:16.896592    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:16.896598    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:16.931691    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:16.931701    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:16.943288    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:16.943298    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:16.967178    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:16.967184    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:16.978289    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:16.978298    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:16.992883    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:16.992893    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:17.006640    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:17.006650    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:19.520055    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:24.522052    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:24.522175    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:24.537762    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:24.537848    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:24.549640    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:24.549712    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:24.560680    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:24.560750    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:24.571419    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:24.571487    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:24.581772    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:24.581835    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:24.596478    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:24.596548    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:24.606419    4162 logs.go:276] 0 containers: []
	W0311 14:00:24.606432    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:24.606489    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:24.617101    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:24.617115    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:24.617120    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:24.634446    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:24.634457    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:24.645963    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:24.645974    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:24.681942    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:24.681953    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:24.686390    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:24.686397    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:24.731293    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:24.731307    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:24.746110    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:24.746121    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:24.761727    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:24.761739    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:24.773645    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:24.773658    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:24.787440    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:24.787452    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:24.802354    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:24.802364    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:24.814244    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:24.814253    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:24.825678    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:24.825687    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:27.351689    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:32.353838    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:32.354021    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:32.374019    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:32.374110    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:32.388123    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:32.388207    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:32.400677    4162 logs.go:276] 2 containers: [d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:32.400750    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:32.416455    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:32.416519    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:32.427125    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:32.427196    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:32.438066    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:32.438132    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:32.452191    4162 logs.go:276] 0 containers: []
	W0311 14:00:32.452205    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:32.452267    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:32.462843    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:32.462859    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:32.462864    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:32.502240    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:32.502257    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:32.590341    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:32.590356    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:32.602109    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:32.602120    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:32.615568    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:32.615579    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:32.627121    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:32.627133    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:32.650083    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:32.650097    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:32.675350    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:32.675357    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:32.687039    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:32.687054    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:32.691480    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:32.691489    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:32.707056    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:32.707069    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:32.720926    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:32.720936    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:32.732664    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:32.732672    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:35.249236    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:40.251255    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:40.251466    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:40.271144    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:40.271247    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:40.285526    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:40.285610    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:40.297791    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:40.297863    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:40.307976    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:40.308035    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:40.318254    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:40.318319    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:40.333332    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:40.333405    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:40.347796    4162 logs.go:276] 0 containers: []
	W0311 14:00:40.347807    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:40.347870    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:40.358283    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:40.358300    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:40.358306    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:40.363272    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:40.363278    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:40.374861    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:40.374874    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:40.399728    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:40.399735    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:40.414065    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:00:40.414075    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:00:40.425387    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:40.425398    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:40.440262    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:40.440273    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:40.458469    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:40.458480    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:40.493948    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:40.493956    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:40.532956    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:40.532966    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:40.549035    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:00:40.549045    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:00:40.560748    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:40.560760    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:40.572319    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:40.572334    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:40.583689    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:40.583700    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:40.600613    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:40.600625    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
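	(Editor's note: from 14:00:40 onward the coredns discovery returns four container IDs instead of two. Since the discovery runs docker ps -a, which lists exited containers alongside running ones, this most likely reflects restarted coredns pods leaving their old containers behind rather than four live replicas; dropping the -a, as below, would show only the currently running set.)

	    docker ps --filter=name=k8s_coredns --format={{.ID}}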
	I0311 14:00:43.115154    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:48.117364    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:48.117779    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:48.130255    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:48.130346    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:48.145142    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:48.145215    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:48.159784    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:48.159855    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:48.169684    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:48.169749    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:48.180537    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:48.180604    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:48.191425    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:48.191495    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:48.201956    4162 logs.go:276] 0 containers: []
	W0311 14:00:48.201967    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:48.202023    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:48.212495    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:48.212514    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:48.212518    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:48.233557    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:48.233566    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:48.249296    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:48.249307    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:48.266061    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:48.266072    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:48.277830    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:48.277840    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:48.282998    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:48.283007    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:48.296700    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:00:48.296713    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:00:48.309148    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:00:48.309159    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:00:48.320398    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:48.320411    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:48.331712    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:48.331725    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:48.346501    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:48.346510    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:48.369979    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:48.369986    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:48.381650    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:48.381663    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:48.418381    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:48.418390    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:48.455785    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:48.455797    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:50.970083    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:55.972324    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:55.972510    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:55.989188    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:00:55.989278    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:56.002200    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:00:56.002271    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:56.014064    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:00:56.014135    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:56.024863    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:00:56.024938    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:56.035176    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:00:56.035244    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:56.045986    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:00:56.046058    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:56.055981    4162 logs.go:276] 0 containers: []
	W0311 14:00:56.055991    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:56.056050    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:56.066345    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:00:56.066363    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:56.066368    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:56.104463    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:00:56.104474    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:00:56.116410    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:00:56.116420    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:00:56.127863    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:56.127873    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:56.132534    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:00:56.132544    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:00:56.147091    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:00:56.147103    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:00:56.161222    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:56.161232    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:56.185041    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:00:56.185052    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:56.196939    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:56.196950    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:56.232295    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:00:56.232304    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:00:56.244282    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:00:56.244292    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:00:56.255496    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:00:56.255505    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:00:56.269887    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:00:56.269897    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:00:56.281846    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:00:56.281858    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:00:56.300180    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:00:56.300192    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:00:58.813414    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:03.814804    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:03.814987    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:03.831662    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:03.831738    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:03.844408    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:03.844489    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:03.855726    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:03.855796    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:03.869082    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:03.869155    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:03.880033    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:03.880096    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:03.890760    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:03.890829    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:03.901202    4162 logs.go:276] 0 containers: []
	W0311 14:01:03.901218    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:03.901278    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:03.916079    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:03.916097    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:03.916102    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:03.933593    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:03.933606    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:03.948769    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:03.948779    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:03.960965    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:03.960978    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:03.972173    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:03.972181    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:03.995981    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:03.995989    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:04.007400    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:04.007411    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:04.012072    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:04.012081    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:04.023695    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:04.023705    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:04.037934    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:04.037945    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:04.049436    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:04.049447    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:04.060866    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:04.060876    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:04.096229    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:04.096241    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:04.110634    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:04.110645    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:04.146581    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:04.146598    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:06.660465    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:11.662738    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:11.662916    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:11.677689    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:11.677782    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:11.693128    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:11.693202    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:11.704198    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:11.704269    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:11.714838    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:11.714913    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:11.725159    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:11.725226    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:11.736167    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:11.736251    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:11.751418    4162 logs.go:276] 0 containers: []
	W0311 14:01:11.751431    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:11.751489    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:11.762501    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:11.762515    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:11.762520    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:11.776519    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:11.776532    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:11.788482    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:11.788493    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:11.813248    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:11.813259    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:11.818045    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:11.818051    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:11.832664    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:11.832675    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:11.853991    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:11.854003    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:11.866294    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:11.866304    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:11.902074    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:11.902086    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:11.914039    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:11.914051    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:11.925410    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:11.925421    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:11.937752    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:11.937763    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:11.973765    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:11.973777    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:11.988086    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:11.988096    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:11.999581    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:11.999592    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:14.513558    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:19.515828    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:19.515992    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:19.528775    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:19.528843    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:19.539561    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:19.539631    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:19.550803    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:19.550876    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:19.562050    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:19.562121    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:19.572761    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:19.572830    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:19.583497    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:19.583563    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:19.594231    4162 logs.go:276] 0 containers: []
	W0311 14:01:19.594243    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:19.594303    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:19.605152    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:19.605169    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:19.605175    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:19.619585    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:19.619597    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:19.631445    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:19.631456    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:19.648853    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:19.648862    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:19.653733    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:19.653741    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:19.667626    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:19.667635    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:19.683374    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:19.683385    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:19.706880    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:19.706890    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:19.718786    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:19.718795    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:19.754110    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:19.754124    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:19.765437    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:19.765447    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:19.780076    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:19.780088    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:19.800461    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:19.800472    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:19.811664    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:19.811673    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:19.823151    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:19.823163    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:22.360098    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:27.362223    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:27.362375    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:27.377850    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:27.377935    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:27.390824    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:27.390903    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:27.401636    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:27.401708    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:27.412989    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:27.413059    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:27.424015    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:27.424081    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:27.435092    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:27.435156    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:27.445005    4162 logs.go:276] 0 containers: []
	W0311 14:01:27.445018    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:27.445072    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:27.457241    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:27.457260    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:27.457265    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:27.471421    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:27.471433    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:27.483783    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:27.483794    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:27.499470    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:27.499479    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:27.515202    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:27.515213    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:27.526957    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:27.526966    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:27.541409    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:27.541419    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:27.558720    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:27.558730    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:27.593958    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:27.593969    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:27.598364    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:27.598372    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:27.633137    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:27.633149    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:27.645801    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:27.645812    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:27.659024    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:27.659035    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:27.671105    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:27.671117    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:27.682442    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:27.682453    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:30.209036    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:35.211267    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:35.211415    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:35.229817    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:35.229904    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:35.241956    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:35.242029    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:35.256500    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:35.256594    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:35.266853    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:35.266937    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:35.277599    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:35.277674    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:35.288222    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:35.288308    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:35.298939    4162 logs.go:276] 0 containers: []
	W0311 14:01:35.298949    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:35.299003    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:35.309657    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:35.309677    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:35.309682    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:35.326435    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:35.326447    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:35.339107    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:35.339119    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:35.364275    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:35.364285    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:35.387748    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:35.387759    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:35.399980    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:35.399991    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:35.411804    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:35.411815    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:35.416863    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:35.416871    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:35.456850    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:35.456861    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:35.472343    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:35.472353    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:35.486275    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:35.486286    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:35.522671    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:35.522684    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:35.535944    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:35.535955    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:35.551577    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:35.551588    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:35.563470    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:35.563479    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:38.085390    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:43.087777    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:43.088093    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:43.119102    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:43.119229    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:43.139026    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:43.139121    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:43.153805    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:43.153882    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:43.167278    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:43.167350    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:43.178294    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:43.178360    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:43.188606    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:43.188676    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:43.199027    4162 logs.go:276] 0 containers: []
	W0311 14:01:43.199039    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:43.199091    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:43.209634    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:43.209652    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:43.209658    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:43.214348    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:43.214357    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:43.250722    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:43.250736    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:43.262993    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:43.263005    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:43.286139    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:43.286147    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:43.321485    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:43.321494    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:43.336039    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:43.336051    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:43.347656    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:43.347666    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:43.361649    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:43.361660    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:43.378521    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:43.378532    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:43.395886    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:43.395896    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:43.407997    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:43.408010    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:43.419159    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:43.419168    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:43.430701    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:43.430710    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:43.442687    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:43.442698    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:45.958975    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:50.961209    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:50.961360    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:50.976722    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:50.976802    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:50.989323    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:50.989399    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:51.000009    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:51.000080    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:51.010224    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:51.010294    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:51.020533    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:51.020600    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:51.031373    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:51.031440    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:51.041002    4162 logs.go:276] 0 containers: []
	W0311 14:01:51.041011    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:51.041059    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:51.051236    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:51.051253    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:51.051259    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:51.087228    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:51.087240    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:51.104613    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:51.104625    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:51.119165    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:51.119177    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:51.130430    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:51.130444    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:51.144919    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:51.144931    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:51.163216    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:51.163228    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:51.177868    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:51.177879    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:01:51.189523    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:51.189534    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:51.207146    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:51.207157    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:51.218712    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:51.218724    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:51.253714    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:51.253721    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:51.258180    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:51.258187    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:51.282484    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:51.282493    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:51.294264    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:51.294274    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:53.816968    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:58.819099    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:58.819242    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:58.830351    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:01:58.830419    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:58.849103    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:01:58.849188    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:58.861060    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:01:58.861154    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:58.872983    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:01:58.873048    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:58.883424    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:01:58.883492    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:58.894190    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:01:58.894255    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:58.906588    4162 logs.go:276] 0 containers: []
	W0311 14:01:58.906657    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:58.906722    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:58.919240    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:01:58.919262    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:58.919268    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:58.955601    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:58.955616    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:58.960673    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:01:58.960683    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:01:58.973757    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:01:58.973767    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:01:58.986592    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:58.986605    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:59.010454    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:01:59.010462    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:59.021964    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:59.021974    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:59.063740    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:01:59.063751    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:01:59.077985    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:01:59.077996    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:01:59.094032    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:01:59.094043    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:01:59.108779    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:01:59.108788    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:01:59.120362    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:01:59.120374    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:01:59.132151    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:01:59.132162    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:01:59.148926    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:01:59.148936    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:01:59.160591    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:01:59.160602    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:02:01.673981    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:02:06.676137    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:02:06.676404    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:02:06.707198    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:02:06.707316    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:02:06.723245    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:02:06.723326    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:02:06.736771    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:02:06.736845    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:02:06.748262    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:02:06.748323    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:02:06.758484    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:02:06.758551    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:02:06.769078    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:02:06.769147    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:02:06.778873    4162 logs.go:276] 0 containers: []
	W0311 14:02:06.778887    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:02:06.778951    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:02:06.788914    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:02:06.788936    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:02:06.788942    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:02:06.793493    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:02:06.793499    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:02:06.810102    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:02:06.810113    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:02:06.827351    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:02:06.827362    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:02:06.843130    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:02:06.843142    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:02:06.878749    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:02:06.878759    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:02:06.899998    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:02:06.900009    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:02:06.912359    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:02:06.912370    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:02:06.924023    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:02:06.924036    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:02:06.948483    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:02:06.948493    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:02:06.961834    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:02:06.961845    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:02:06.997440    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:02:06.997450    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:02:07.011859    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:02:07.011869    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:02:07.028621    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:02:07.028632    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:02:07.042960    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:02:07.042970    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:02:09.555864    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:02:14.557508    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:02:14.557591    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:02:14.578160    4162 logs.go:276] 1 containers: [2b9f9cfef78a]
	I0311 14:02:14.578237    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:02:14.595699    4162 logs.go:276] 1 containers: [9d1c2cec57bc]
	I0311 14:02:14.595788    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:02:14.616222    4162 logs.go:276] 4 containers: [a924962f30f8 41f9ce40851e d3c8d3b0c7f1 b9719d23f2f1]
	I0311 14:02:14.616359    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:02:14.637431    4162 logs.go:276] 1 containers: [98ecad162532]
	I0311 14:02:14.637508    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:02:14.655188    4162 logs.go:276] 1 containers: [5832f82ba133]
	I0311 14:02:14.655294    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:02:14.672638    4162 logs.go:276] 1 containers: [97a2f11b555a]
	I0311 14:02:14.672722    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:02:14.691300    4162 logs.go:276] 0 containers: []
	W0311 14:02:14.691316    4162 logs.go:278] No container was found matching "kindnet"
	I0311 14:02:14.691411    4162 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:02:14.708602    4162 logs.go:276] 1 containers: [05067d4bae22]
	I0311 14:02:14.708623    4162 logs.go:123] Gathering logs for dmesg ...
	I0311 14:02:14.708628    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:02:14.713668    4162 logs.go:123] Gathering logs for etcd [9d1c2cec57bc] ...
	I0311 14:02:14.713678    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d1c2cec57bc"
	I0311 14:02:14.728469    4162 logs.go:123] Gathering logs for coredns [a924962f30f8] ...
	I0311 14:02:14.728478    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a924962f30f8"
	I0311 14:02:14.741006    4162 logs.go:123] Gathering logs for coredns [d3c8d3b0c7f1] ...
	I0311 14:02:14.741017    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3c8d3b0c7f1"
	I0311 14:02:14.753300    4162 logs.go:123] Gathering logs for kube-controller-manager [97a2f11b555a] ...
	I0311 14:02:14.753311    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97a2f11b555a"
	I0311 14:02:14.771804    4162 logs.go:123] Gathering logs for Docker ...
	I0311 14:02:14.771821    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:02:14.796059    4162 logs.go:123] Gathering logs for kubelet ...
	I0311 14:02:14.796073    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:02:14.833691    4162 logs.go:123] Gathering logs for kube-apiserver [2b9f9cfef78a] ...
	I0311 14:02:14.833712    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9f9cfef78a"
	I0311 14:02:14.849450    4162 logs.go:123] Gathering logs for coredns [b9719d23f2f1] ...
	I0311 14:02:14.849461    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9719d23f2f1"
	I0311 14:02:14.862358    4162 logs.go:123] Gathering logs for kube-scheduler [98ecad162532] ...
	I0311 14:02:14.862368    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 98ecad162532"
	I0311 14:02:14.877416    4162 logs.go:123] Gathering logs for container status ...
	I0311 14:02:14.877428    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:02:14.889974    4162 logs.go:123] Gathering logs for coredns [41f9ce40851e] ...
	I0311 14:02:14.889984    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41f9ce40851e"
	I0311 14:02:14.901958    4162 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:02:14.901967    4162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:02:14.937822    4162 logs.go:123] Gathering logs for kube-proxy [5832f82ba133] ...
	I0311 14:02:14.937834    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5832f82ba133"
	I0311 14:02:14.950496    4162 logs.go:123] Gathering logs for storage-provisioner [05067d4bae22] ...
	I0311 14:02:14.950507    4162 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05067d4bae22"
	I0311 14:02:17.469237    4162 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:02:22.471379    4162 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:02:22.476801    4162 out.go:177] 
	W0311 14:02:22.481679    4162 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0311 14:02:22.481685    4162 out.go:239] * 
	W0311 14:02:22.482212    4162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:02:22.490651    4162 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-168000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-11 14:02:22.57755 -0700 PDT m=+3165.920939918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-168000 -n running-upgrade-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-168000 -n running-upgrade-168000: exit status 2 (15.767233375s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-168000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-425000 sudo cat                            | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo cat                            | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo                                | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo                                | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo                                | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo cat                            | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo cat                            | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo                                | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo                                | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo                                | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo find                           | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-425000 sudo crio                           | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-425000                                     | cilium-425000             | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT | 11 Mar 24 13:51 PDT |
	| start   | -p kubernetes-upgrade-646000                         | kubernetes-upgrade-646000 | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-485000                             | offline-docker-485000     | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT | 11 Mar 24 13:51 PDT |
	| stop    | -p kubernetes-upgrade-646000                         | kubernetes-upgrade-646000 | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT | 11 Mar 24 13:51 PDT |
	| start   | -p kubernetes-upgrade-646000                         | kubernetes-upgrade-646000 | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-517000                            | minikube                  | jenkins | v1.26.0 | 11 Mar 24 13:51 PDT | 11 Mar 24 13:53 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-646000                         | kubernetes-upgrade-646000 | jenkins | v1.32.0 | 11 Mar 24 13:51 PDT | 11 Mar 24 13:51 PDT |
	| start   | -p running-upgrade-168000                            | minikube                  | jenkins | v1.26.0 | 11 Mar 24 13:51 PDT | 11 Mar 24 13:53 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-517000 stop                          | minikube                  | jenkins | v1.26.0 | 11 Mar 24 13:53 PDT | 11 Mar 24 13:53 PDT |
	| start   | -p stopped-upgrade-517000                            | stopped-upgrade-517000    | jenkins | v1.32.0 | 11 Mar 24 13:53 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-168000                            | running-upgrade-168000    | jenkins | v1.32.0 | 11 Mar 24 13:53 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-517000                            | stopped-upgrade-517000    | jenkins | v1.32.0 | 11 Mar 24 14:02 PDT | 11 Mar 24 14:02 PDT |
	| start   | -p pause-044000 --memory=2048                        | pause-044000              | jenkins | v1.32.0 | 11 Mar 24 14:02 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 14:02:33
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 14:02:33.398910    4391 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:02:33.399048    4391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:02:33.399050    4391 out.go:304] Setting ErrFile to fd 2...
	I0311 14:02:33.399051    4391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:02:33.399172    4391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:02:33.400192    4391 out.go:298] Setting JSON to false
	I0311 14:02:33.420076    4391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3724,"bootTime":1710187229,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:02:33.420146    4391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:02:33.424901    4391 out.go:177] * [pause-044000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:02:33.433697    4391 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:02:33.438788    4391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:02:33.433729    4391 notify.go:220] Checking for updates...
	I0311 14:02:33.444661    4391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:02:33.447716    4391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:02:33.450692    4391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:02:33.453696    4391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:02:33.457042    4391 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:02:33.457103    4391 config.go:182] Loaded profile config "running-upgrade-168000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 14:02:33.457149    4391 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:02:33.460574    4391 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:02:33.467705    4391 start.go:297] selected driver: qemu2
	I0311 14:02:33.467709    4391 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:02:33.467716    4391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:02:33.470286    4391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:02:33.473711    4391 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:02:33.476750    4391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:02:33.476764    4391 cni.go:84] Creating CNI manager for ""
	I0311 14:02:33.476772    4391 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:02:33.476775    4391 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:02:33.476799    4391 start.go:340] cluster config:
	{Name:pause-044000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:02:33.481688    4391 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:02:33.488659    4391 out.go:177] * Starting "pause-044000" primary control-plane node in "pause-044000" cluster
	I0311 14:02:33.492700    4391 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:02:33.492714    4391 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:02:33.492723    4391 cache.go:56] Caching tarball of preloaded images
	I0311 14:02:33.492784    4391 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:02:33.492787    4391 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:02:33.492857    4391 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/pause-044000/config.json ...
	I0311 14:02:33.492870    4391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/pause-044000/config.json: {Name:mkb94d7583f5a22a03c3c1668fed5ebd06cfd605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:02:33.493213    4391 start.go:360] acquireMachinesLock for pause-044000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:02:33.493246    4391 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "pause-044000"
	I0311 14:02:33.493255    4391 start.go:93] Provisioning new machine with config: &{Name:pause-044000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:02:33.493285    4391 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:02:33.501706    4391 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0311 14:02:33.527897    4391 start.go:159] libmachine.API.Create for "pause-044000" (driver="qemu2")
	I0311 14:02:33.527930    4391 client.go:168] LocalClient.Create starting
	I0311 14:02:33.528005    4391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:02:33.528034    4391 main.go:141] libmachine: Decoding PEM data...
	I0311 14:02:33.528040    4391 main.go:141] libmachine: Parsing certificate...
	I0311 14:02:33.528087    4391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:02:33.528106    4391 main.go:141] libmachine: Decoding PEM data...
	I0311 14:02:33.528113    4391 main.go:141] libmachine: Parsing certificate...
	I0311 14:02:33.528446    4391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:02:33.917220    4391 main.go:141] libmachine: Creating SSH key...
	I0311 14:02:33.985590    4391 main.go:141] libmachine: Creating Disk image...
	I0311 14:02:33.985595    4391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:02:33.985759    4391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/disk.qcow2
	I0311 14:02:33.998745    4391 main.go:141] libmachine: STDOUT: 
	I0311 14:02:33.998762    4391 main.go:141] libmachine: STDERR: 
	I0311 14:02:33.998825    4391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/disk.qcow2 +20000M
	I0311 14:02:34.009784    4391 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:02:34.009797    4391 main.go:141] libmachine: STDERR: 
	I0311 14:02:34.009817    4391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/disk.qcow2
	I0311 14:02:34.009821    4391 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:02:34.009849    4391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:1c:43:34:ba:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/pause-044000/disk.qcow2
	I0311 14:02:34.012714    4391 main.go:141] libmachine: STDOUT: 
	I0311 14:02:34.012728    4391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:02:34.012748    4391 client.go:171] duration metric: took 484.829125ms to LocalClient.Create
	I0311 14:02:36.014891    4391 start.go:128] duration metric: took 2.521660209s to createHost
	I0311 14:02:36.014962    4391 start.go:83] releasing machines lock for "pause-044000", held for 2.521789667s
	W0311 14:02:36.015035    4391 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:02:36.029327    4391 out.go:177] * Deleting "pause-044000" in qemu2 ...
	W0311 14:02:36.060484    4391 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:02:36.060519    4391 start.go:728] Will try again in 5 seconds ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-11 20:52:52 UTC, ends at Mon 2024-03-11 21:02:38 UTC. --
	Mar 11 21:02:22 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:22Z" level=error msg="ContainerStats resp: {0x40007dc0c0 linux}"
	Mar 11 21:02:22 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:22Z" level=error msg="ContainerStats resp: {0x40009dff40 linux}"
	Mar 11 21:02:22 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:22Z" level=error msg="ContainerStats resp: {0x400041fa00 linux}"
	Mar 11 21:02:22 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:22Z" level=error msg="ContainerStats resp: {0x40007d6b40 linux}"
	Mar 11 21:02:23 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:23Z" level=error msg="ContainerStats resp: {0x400082f880 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x40007dc8c0 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x40007dca00 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x40007dcec0 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x40009a3640 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x40009a3800 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x40009a3f80 linux}"
	Mar 11 21:02:24 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:24Z" level=error msg="ContainerStats resp: {0x4000a2a580 linux}"
	Mar 11 21:02:25 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:25Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 11 21:02:30 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:30Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 11 21:02:35 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:35Z" level=error msg="ContainerStats resp: {0x40008ffa00 linux}"
	Mar 11 21:02:35 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:35Z" level=error msg="ContainerStats resp: {0x4000832100 linux}"
	Mar 11 21:02:35 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:35Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 11 21:02:36 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:36Z" level=error msg="ContainerStats resp: {0x40007d7a00 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x4000359a80 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x40007dcc80 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x40007dd100 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x4000721380 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x4000721b00 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x4000721c40 linux}"
	Mar 11 21:02:37 running-upgrade-168000 cri-dockerd[4374]: time="2024-03-11T21:02:37Z" level=error msg="ContainerStats resp: {0x40007ddd80 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	57ebe23ef9d57       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   a085c05b10a69
	24962eb55dbd4       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   ecc5dab84d5f8
	a924962f30f8d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   a085c05b10a69
	41f9ce40851ec       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ecc5dab84d5f8
	05067d4bae222       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   a8f71fc6ec768
	5832f82ba1330       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   4ade06cf7258f
	97a2f11b555a3       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   7256221d74cb6
	9d1c2cec57bc3       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   df7af682bbbee
	98ecad162532d       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   3fbcc3f2571ca
	2b9f9cfef78af       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   cab2f36806e81
	
	
	==> coredns [24962eb55dbd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7629286498922295127.602446577300484141. HINFO: read udp 10.244.0.2:34635->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629286498922295127.602446577300484141. HINFO: read udp 10.244.0.2:46989->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629286498922295127.602446577300484141. HINFO: read udp 10.244.0.2:35345->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7629286498922295127.602446577300484141. HINFO: read udp 10.244.0.2:60099->10.0.2.3:53: i/o timeout
	
	
	==> coredns [41f9ce40851e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:40409->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:54280->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:48122->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:51159->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:46819->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:45883->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:50376->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:50769->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:36947->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 544039204133127750.1171935436356731501. HINFO: read udp 10.244.0.2:55745->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [57ebe23ef9d5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9082409659384466810.986892218319898746. HINFO: read udp 10.244.0.3:58563->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9082409659384466810.986892218319898746. HINFO: read udp 10.244.0.3:38795->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9082409659384466810.986892218319898746. HINFO: read udp 10.244.0.3:35171->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a924962f30f8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:57508->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:59239->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:50077->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:33639->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:47376->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:32907->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:60873->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:39560->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:38567->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 956347949444713378.1348650422552279486. HINFO: read udp 10.244.0.3:60117->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-168000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-168000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=running-upgrade-168000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T13_58_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 20:58:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-168000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 21:02:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 20:58:21 +0000   Mon, 11 Mar 2024 20:58:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 20:58:21 +0000   Mon, 11 Mar 2024 20:58:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 20:58:21 +0000   Mon, 11 Mar 2024 20:58:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 20:58:21 +0000   Mon, 11 Mar 2024 20:58:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-168000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 15366b26e46d44069617cc4cfa9f1db5
	  System UUID:                15366b26e46d44069617cc4cfa9f1db5
	  Boot ID:                    375a00f3-b3d3-4b87-8335-094a6f7f5792
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8rjs4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-lfv59                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-168000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-168000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-168000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-6tzf5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-168000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-168000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-168000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-168000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-168000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-168000 event: Registered Node running-upgrade-168000 in Controller
	
	
	==> dmesg <==
	[  +0.071030] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.135686] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.088941] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +0.078827] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +2.509752] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[ +14.131918] systemd-fstab-generator[1956]: Ignoring "noauto" for root device
	[ +11.279279] kauditd_printk_skb: 47 callbacks suppressed
	[  +2.991844] systemd-fstab-generator[2714]: Ignoring "noauto" for root device
	[  +0.161218] systemd-fstab-generator[2749]: Ignoring "noauto" for root device
	[  +0.110545] systemd-fstab-generator[2763]: Ignoring "noauto" for root device
	[  +0.108800] systemd-fstab-generator[2778]: Ignoring "noauto" for root device
	[  +5.163662] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.535510] systemd-fstab-generator[4329]: Ignoring "noauto" for root device
	[  +0.096016] systemd-fstab-generator[4342]: Ignoring "noauto" for root device
	[  +0.085393] systemd-fstab-generator[4353]: Ignoring "noauto" for root device
	[  +0.104451] systemd-fstab-generator[4367]: Ignoring "noauto" for root device
	[  +2.202273] systemd-fstab-generator[4521]: Ignoring "noauto" for root device
	[  +2.930718] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.421093] systemd-fstab-generator[5343]: Ignoring "noauto" for root device
	[Mar11 20:54] kauditd_printk_skb: 9 callbacks suppressed
	[  +0.609677] systemd-fstab-generator[6724]: Ignoring "noauto" for root device
	[ +18.455893] kauditd_printk_skb: 1 callbacks suppressed
	[Mar11 20:58] systemd-fstab-generator[15381]: Ignoring "noauto" for root device
	[  +5.630156] systemd-fstab-generator[15983]: Ignoring "noauto" for root device
	[  +0.466487] systemd-fstab-generator[16115]: Ignoring "noauto" for root device
	
	
	==> etcd [9d1c2cec57bc] <==
	{"level":"info","ts":"2024-03-11T20:58:17.049Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-11T20:58:17.050Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-11T20:58:17.050Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-11T20:58:17.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-11T20:58:17.050Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-11T20:58:17.050Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-11T20:58:17.050Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-11T20:58:17.342Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:17.346Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:17.346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:17.346Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T20:58:17.346Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-168000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T20:58:17.346Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:58:17.347Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T20:58:17.350Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T20:58:17.350Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-11T20:58:17.358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T20:58:17.358Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:02:38 up 9 min,  0 users,  load average: 0.25, 0.43, 0.30
	Linux running-upgrade-168000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2b9f9cfef78a] <==
	I0311 20:58:18.779727       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0311 20:58:18.780754       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0311 20:58:18.780782       1 cache.go:39] Caches are synced for autoregister controller
	I0311 20:58:18.780793       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0311 20:58:18.780978       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0311 20:58:18.790371       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0311 20:58:18.819769       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0311 20:58:19.503070       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0311 20:58:19.686049       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0311 20:58:19.689296       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0311 20:58:19.689319       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0311 20:58:19.823923       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0311 20:58:19.833138       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0311 20:58:19.861232       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0311 20:58:19.863205       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0311 20:58:19.863580       1 controller.go:611] quota admission added evaluator for: endpoints
	I0311 20:58:19.864847       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0311 20:58:20.812331       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0311 20:58:21.558175       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0311 20:58:21.561662       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0311 20:58:21.566271       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0311 20:58:21.610014       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0311 20:58:34.165220       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0311 20:58:34.263851       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0311 20:58:35.386335       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [97a2f11b555a] <==
	I0311 20:58:33.635170       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0311 20:58:33.636056       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0311 20:58:33.636110       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0311 20:58:33.637586       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0311 20:58:33.640679       1 shared_informer.go:262] Caches are synced for crt configmap
	I0311 20:58:33.663259       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0311 20:58:33.663308       1 shared_informer.go:262] Caches are synced for cronjob
	I0311 20:58:33.663342       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0311 20:58:33.663310       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0311 20:58:33.665752       1 shared_informer.go:262] Caches are synced for endpoint
	I0311 20:58:33.712694       1 shared_informer.go:262] Caches are synced for disruption
	I0311 20:58:33.712702       1 disruption.go:371] Sending events to api server.
	I0311 20:58:33.762749       1 shared_informer.go:262] Caches are synced for stateful set
	I0311 20:58:33.813784       1 shared_informer.go:262] Caches are synced for attach detach
	I0311 20:58:33.822939       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0311 20:58:33.824074       1 shared_informer.go:262] Caches are synced for resource quota
	I0311 20:58:33.862938       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0311 20:58:33.865120       1 shared_informer.go:262] Caches are synced for resource quota
	I0311 20:58:34.168371       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6tzf5"
	I0311 20:58:34.265231       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0311 20:58:34.281110       1 shared_informer.go:262] Caches are synced for garbage collector
	I0311 20:58:34.311913       1 shared_informer.go:262] Caches are synced for garbage collector
	I0311 20:58:34.311922       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0311 20:58:34.667362       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8rjs4"
	I0311 20:58:34.672610       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lfv59"
	
	
	==> kube-proxy [5832f82ba133] <==
	I0311 20:58:35.372636       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0311 20:58:35.373351       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0311 20:58:35.373374       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0311 20:58:35.382743       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0311 20:58:35.382753       1 server_others.go:206] "Using iptables Proxier"
	I0311 20:58:35.383118       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0311 20:58:35.383620       1 server.go:661] "Version info" version="v1.24.1"
	I0311 20:58:35.383631       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 20:58:35.384127       1 config.go:317] "Starting service config controller"
	I0311 20:58:35.384140       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0311 20:58:35.384149       1 config.go:226] "Starting endpoint slice config controller"
	I0311 20:58:35.384231       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0311 20:58:35.385194       1 config.go:444] "Starting node config controller"
	I0311 20:58:35.385223       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0311 20:58:35.484908       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0311 20:58:35.484940       1 shared_informer.go:262] Caches are synced for service config
	I0311 20:58:35.487292       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [98ecad162532] <==
	W0311 20:58:18.746052       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 20:58:18.746063       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 20:58:18.746408       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 20:58:18.746422       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 20:58:18.746445       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 20:58:18.746456       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0311 20:58:18.746472       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:58:18.746479       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:58:18.746501       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 20:58:18.746507       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 20:58:18.746547       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:58:18.746560       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 20:58:18.746583       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 20:58:18.746590       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 20:58:18.746657       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0311 20:58:18.746664       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0311 20:58:18.750132       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0311 20:58:18.750141       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0311 20:58:19.602338       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 20:58:19.602479       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 20:58:19.645989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0311 20:58:19.646031       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0311 20:58:19.694994       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 20:58:19.695239       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0311 20:58:20.142288       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-11 20:52:52 UTC, ends at Mon 2024-03-11 21:02:39 UTC. --
	Mar 11 20:58:33 running-upgrade-168000 kubelet[15989]: I0311 20:58:33.670246   15989 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 11 20:58:33 running-upgrade-168000 kubelet[15989]: I0311 20:58:33.771354   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4t7\" (UniqueName: \"kubernetes.io/projected/198da296-7e5e-4998-9aeb-ac3bff8b14b3-kube-api-access-zg4t7\") pod \"storage-provisioner\" (UID: \"198da296-7e5e-4998-9aeb-ac3bff8b14b3\") " pod="kube-system/storage-provisioner"
	Mar 11 20:58:33 running-upgrade-168000 kubelet[15989]: I0311 20:58:33.771375   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/198da296-7e5e-4998-9aeb-ac3bff8b14b3-tmp\") pod \"storage-provisioner\" (UID: \"198da296-7e5e-4998-9aeb-ac3bff8b14b3\") " pod="kube-system/storage-provisioner"
	Mar 11 20:58:33 running-upgrade-168000 kubelet[15989]: E0311 20:58:33.876696   15989 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 11 20:58:33 running-upgrade-168000 kubelet[15989]: E0311 20:58:33.876717   15989 projected.go:192] Error preparing data for projected volume kube-api-access-zg4t7 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 11 20:58:33 running-upgrade-168000 kubelet[15989]: E0311 20:58:33.876753   15989 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/198da296-7e5e-4998-9aeb-ac3bff8b14b3-kube-api-access-zg4t7 podName:198da296-7e5e-4998-9aeb-ac3bff8b14b3 nodeName:}" failed. No retries permitted until 2024-03-11 20:58:34.376738766 +0000 UTC m=+12.834752639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zg4t7" (UniqueName: "kubernetes.io/projected/198da296-7e5e-4998-9aeb-ac3bff8b14b3-kube-api-access-zg4t7") pod "storage-provisioner" (UID: "198da296-7e5e-4998-9aeb-ac3bff8b14b3") : configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.171477   15989 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.276572   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvk4\" (UniqueName: \"kubernetes.io/projected/4c8166dd-f48b-4461-bacb-98f0f73294aa-kube-api-access-hmvk4\") pod \"kube-proxy-6tzf5\" (UID: \"4c8166dd-f48b-4461-bacb-98f0f73294aa\") " pod="kube-system/kube-proxy-6tzf5"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.276694   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8166dd-f48b-4461-bacb-98f0f73294aa-xtables-lock\") pod \"kube-proxy-6tzf5\" (UID: \"4c8166dd-f48b-4461-bacb-98f0f73294aa\") " pod="kube-system/kube-proxy-6tzf5"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.276712   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8166dd-f48b-4461-bacb-98f0f73294aa-lib-modules\") pod \"kube-proxy-6tzf5\" (UID: \"4c8166dd-f48b-4461-bacb-98f0f73294aa\") " pod="kube-system/kube-proxy-6tzf5"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.276728   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c8166dd-f48b-4461-bacb-98f0f73294aa-kube-proxy\") pod \"kube-proxy-6tzf5\" (UID: \"4c8166dd-f48b-4461-bacb-98f0f73294aa\") " pod="kube-system/kube-proxy-6tzf5"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: E0311 20:58:34.376971   15989 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: E0311 20:58:34.376990   15989 projected.go:192] Error preparing data for projected volume kube-api-access-zg4t7 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: E0311 20:58:34.377014   15989 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/198da296-7e5e-4998-9aeb-ac3bff8b14b3-kube-api-access-zg4t7 podName:198da296-7e5e-4998-9aeb-ac3bff8b14b3 nodeName:}" failed. No retries permitted until 2024-03-11 20:58:35.377004542 +0000 UTC m=+13.835018457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zg4t7" (UniqueName: "kubernetes.io/projected/198da296-7e5e-4998-9aeb-ac3bff8b14b3-kube-api-access-zg4t7") pod "storage-provisioner" (UID: "198da296-7e5e-4998-9aeb-ac3bff8b14b3") : configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: E0311 20:58:34.379719   15989 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: E0311 20:58:34.379732   15989 projected.go:192] Error preparing data for projected volume kube-api-access-hmvk4 for pod kube-system/kube-proxy-6tzf5: configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: E0311 20:58:34.379751   15989 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/4c8166dd-f48b-4461-bacb-98f0f73294aa-kube-api-access-hmvk4 podName:4c8166dd-f48b-4461-bacb-98f0f73294aa nodeName:}" failed. No retries permitted until 2024-03-11 20:58:34.879743407 +0000 UTC m=+13.337757322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hmvk4" (UniqueName: "kubernetes.io/projected/4c8166dd-f48b-4461-bacb-98f0f73294aa-kube-api-access-hmvk4") pod "kube-proxy-6tzf5" (UID: "4c8166dd-f48b-4461-bacb-98f0f73294aa") : configmap "kube-root-ca.crt" not found
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.669395   15989 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.677448   15989 topology_manager.go:200] "Topology Admit Handler"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.779575   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq486\" (UniqueName: \"kubernetes.io/projected/e97ca58b-1c98-4dde-89a8-a4f9083ae306-kube-api-access-jq486\") pod \"coredns-6d4b75cb6d-lfv59\" (UID: \"e97ca58b-1c98-4dde-89a8-a4f9083ae306\") " pod="kube-system/coredns-6d4b75cb6d-lfv59"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.779669   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e97ca58b-1c98-4dde-89a8-a4f9083ae306-config-volume\") pod \"coredns-6d4b75cb6d-lfv59\" (UID: \"e97ca58b-1c98-4dde-89a8-a4f9083ae306\") " pod="kube-system/coredns-6d4b75cb6d-lfv59"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.779709   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7976edf-df97-4dfd-9ee4-aec5270d018a-config-volume\") pod \"coredns-6d4b75cb6d-8rjs4\" (UID: \"f7976edf-df97-4dfd-9ee4-aec5270d018a\") " pod="kube-system/coredns-6d4b75cb6d-8rjs4"
	Mar 11 20:58:34 running-upgrade-168000 kubelet[15989]: I0311 20:58:34.779746   15989 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgcxr\" (UniqueName: \"kubernetes.io/projected/f7976edf-df97-4dfd-9ee4-aec5270d018a-kube-api-access-rgcxr\") pod \"coredns-6d4b75cb6d-8rjs4\" (UID: \"f7976edf-df97-4dfd-9ee4-aec5270d018a\") " pod="kube-system/coredns-6d4b75cb6d-8rjs4"
	Mar 11 21:02:22 running-upgrade-168000 kubelet[15989]: I0311 21:02:22.907096   15989 scope.go:110] "RemoveContainer" containerID="b9719d23f2f13cb91dff299836fbe7c80477e1785d9fa061294bdce38d254f3a"
	Mar 11 21:02:22 running-upgrade-168000 kubelet[15989]: I0311 21:02:22.924772   15989 scope.go:110] "RemoveContainer" containerID="d3c8d3b0c7f16069810d65821b8c8efd46d04da7bd687d86fda337d76003572a"
	
	
	==> storage-provisioner [05067d4bae22] <==
	I0311 20:58:35.628217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 20:58:35.632520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 20:58:35.634897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 20:58:35.639799       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 20:58:35.639986       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-168000_fdf4f3f8-0ba8-472a-bf41-bb8563929d43!
	I0311 20:58:35.640383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf3031c3-a128-491b-a490-04a58e601003", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-168000_fdf4f3f8-0ba8-472a-bf41-bb8563929d43 became leader
	I0311 20:58:35.740231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-168000_fdf4f3f8-0ba8-472a-bf41-bb8563929d43!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-168000 -n running-upgrade-168000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-168000 -n running-upgrade-168000: exit status 2 (15.695973083s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-168000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-168000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-168000
--- FAIL: TestRunningBinaryUpgrade (669.49s)
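
The CoreDNS logs captured above all show the same pattern: the loop-detection HINFO probes (the random-number query names) time out against 10.0.2.3, the resolver that QEMU's user-mode network advertises to the guest, so the pods never had a working upstream resolver. Were the apiserver still up (the status above says it is not), one way to confirm the upstream is unreachable, rather than CoreDNS being misconfigured, would be a throwaway busybox pod. This is only a sketch; `dnscheck` is an arbitrary name, not something the harness runs:

	# query an external name directly against QEMU's built-in resolver
	kubectl --context running-upgrade-168000 run dnscheck --image=busybox:1.36 \
	  --rm -it --restart=Never -- nslookup dns.google 10.0.2.3

A timeout here would mirror the CoreDNS errors and place the fault in the QEMU user-net DNS forwarding rather than in the cluster.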

TestKubernetesUpgrade (17.36s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.768218959s)

-- stdout --
	* [kubernetes-upgrade-646000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-646000" primary control-plane node in "kubernetes-upgrade-646000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
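
Both VM creation attempts above fail identically with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, before Kubernetes is ever involved, so the socket_vmnet daemon on the CI host is the thing to inspect. A host-side sanity check might look like the following (a sketch; it assumes socket_vmnet runs as a launchd service, which may not match this host's setup):

	# is the socket there, and is anything serving it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet

A present socket that still refuses connections usually means the daemon exited and left a stale socket behind; no socket at all means it never started.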
** stderr ** 
	I0311 13:51:28.089415    4058 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:51:28.089549    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:51:28.089552    4058 out.go:304] Setting ErrFile to fd 2...
	I0311 13:51:28.089558    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:51:28.089686    4058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:51:28.090748    4058 out.go:298] Setting JSON to false
	I0311 13:51:28.106570    4058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3059,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:51:28.106652    4058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:51:28.110510    4058 out.go:177] * [kubernetes-upgrade-646000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:51:28.122568    4058 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:51:28.122622    4058 notify.go:220] Checking for updates...
	I0311 13:51:28.126482    4058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:51:28.130439    4058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:51:28.131967    4058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:51:28.135466    4058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:51:28.138479    4058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:51:28.141858    4058 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:51:28.141926    4058 config.go:182] Loaded profile config "offline-docker-485000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:51:28.141981    4058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:51:28.146435    4058 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 13:51:28.153493    4058 start.go:297] selected driver: qemu2
	I0311 13:51:28.153499    4058 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:51:28.153507    4058 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:51:28.155816    4058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:51:28.159458    4058 out.go:177] * Automatically selected the socket_vmnet network
	I0311 13:51:28.162613    4058 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 13:51:28.162666    4058 cni.go:84] Creating CNI manager for ""
	I0311 13:51:28.162675    4058 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 13:51:28.162704    4058 start.go:340] cluster config:
	{Name:kubernetes-upgrade-646000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:51:28.167417    4058 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:51:28.174434    4058 out.go:177] * Starting "kubernetes-upgrade-646000" primary control-plane node in "kubernetes-upgrade-646000" cluster
	I0311 13:51:28.178441    4058 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:51:28.178457    4058 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 13:51:28.178469    4058 cache.go:56] Caching tarball of preloaded images
	I0311 13:51:28.178526    4058 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:51:28.178532    4058 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 13:51:28.178598    4058 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kubernetes-upgrade-646000/config.json ...
	I0311 13:51:28.178609    4058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kubernetes-upgrade-646000/config.json: {Name:mk5d2d72a0095650eb6844b0bc372bf9f4b95dd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:51:28.178824    4058 start.go:360] acquireMachinesLock for kubernetes-upgrade-646000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:51:28.178869    4058 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "kubernetes-upgrade-646000"
	I0311 13:51:28.178883    4058 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:51:28.178913    4058 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:51:28.183466    4058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 13:51:28.201142    4058 start.go:159] libmachine.API.Create for "kubernetes-upgrade-646000" (driver="qemu2")
	I0311 13:51:28.201166    4058 client.go:168] LocalClient.Create starting
	I0311 13:51:28.201237    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:51:28.201267    4058 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:28.201279    4058 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:28.201320    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:51:28.201342    4058 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:28.201352    4058 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:28.201702    4058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:51:28.338302    4058 main.go:141] libmachine: Creating SSH key...
	I0311 13:51:28.463693    4058 main.go:141] libmachine: Creating Disk image...
	I0311 13:51:28.463699    4058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:51:28.463872    4058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:28.476071    4058 main.go:141] libmachine: STDOUT: 
	I0311 13:51:28.476085    4058 main.go:141] libmachine: STDERR: 
	I0311 13:51:28.476151    4058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2 +20000M
	I0311 13:51:28.486842    4058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:51:28.486857    4058 main.go:141] libmachine: STDERR: 
	I0311 13:51:28.486882    4058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:28.486897    4058 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:51:28.486929    4058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b7:a0:24:b6:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:28.488671    4058 main.go:141] libmachine: STDOUT: 
	I0311 13:51:28.488697    4058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:51:28.488717    4058 client.go:171] duration metric: took 287.554625ms to LocalClient.Create
	I0311 13:51:30.490953    4058 start.go:128] duration metric: took 2.312084959s to createHost
	I0311 13:51:30.491030    4058 start.go:83] releasing machines lock for "kubernetes-upgrade-646000", held for 2.312222375s
	W0311 13:51:30.491075    4058 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:30.507235    4058 out.go:177] * Deleting "kubernetes-upgrade-646000" in qemu2 ...
	W0311 13:51:30.532249    4058 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:30.532278    4058 start.go:728] Will try again in 5 seconds ...
	I0311 13:51:35.534164    4058 start.go:360] acquireMachinesLock for kubernetes-upgrade-646000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:51:35.534242    4058 start.go:364] duration metric: took 60.334µs to acquireMachinesLock for "kubernetes-upgrade-646000"
	I0311 13:51:35.534255    4058 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:51:35.534297    4058 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 13:51:35.543410    4058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 13:51:35.558150    4058 start.go:159] libmachine.API.Create for "kubernetes-upgrade-646000" (driver="qemu2")
	I0311 13:51:35.558177    4058 client.go:168] LocalClient.Create starting
	I0311 13:51:35.558239    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 13:51:35.558274    4058 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:35.558284    4058 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:35.558316    4058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 13:51:35.558337    4058 main.go:141] libmachine: Decoding PEM data...
	I0311 13:51:35.558343    4058 main.go:141] libmachine: Parsing certificate...
	I0311 13:51:35.558573    4058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 13:51:35.700762    4058 main.go:141] libmachine: Creating SSH key...
	I0311 13:51:35.760394    4058 main.go:141] libmachine: Creating Disk image...
	I0311 13:51:35.760399    4058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 13:51:35.760549    4058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:35.773058    4058 main.go:141] libmachine: STDOUT: 
	I0311 13:51:35.773080    4058 main.go:141] libmachine: STDERR: 
	I0311 13:51:35.773139    4058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2 +20000M
	I0311 13:51:35.783795    4058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 13:51:35.783811    4058 main.go:141] libmachine: STDERR: 
	I0311 13:51:35.783823    4058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:35.783828    4058 main.go:141] libmachine: Starting QEMU VM...
	I0311 13:51:35.783863    4058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:2a:a7:a7:61:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:35.785561    4058 main.go:141] libmachine: STDOUT: 
	I0311 13:51:35.785576    4058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:51:35.785589    4058 client.go:171] duration metric: took 227.414959ms to LocalClient.Create
	I0311 13:51:37.787711    4058 start.go:128] duration metric: took 2.253460458s to createHost
	I0311 13:51:37.787787    4058 start.go:83] releasing machines lock for "kubernetes-upgrade-646000", held for 2.253607083s
	W0311 13:51:37.788205    4058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:37.799073    4058 out.go:177] 
	W0311 13:51:37.805657    4058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:51:37.805681    4058 out.go:239] * 
	* 
	W0311 13:51:37.807470    4058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:51:37.816834    4058 out.go:177] 

** /stderr **
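
The create sequence in the log above is: qemu-img convert (raw to qcow2), qemu-img resize (+20000M), then boot via socket_vmnet_client. Both image steps succeed (empty STDERR); only the socket step fails. The image steps can be replayed in isolation; a minimal sketch with placeholder file names rather than the test's full paths:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # same raw-to-qcow2 conversion the driver runs
	qemu-img resize disk.qcow2 +20000M                           # grow the virtual size by 20000 MiB, as in the log
	qemu-img info disk.qcow2                                     # verify the format and the new virtual size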
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
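
Both create attempts (13:51:28 and 13:51:35) die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so no VM ever boots and the Kubernetes version under test never matters. A host-side probe, assuming the /opt/socket_vmnet layout shown in the log (the daemon invocation below is a hypothetical debugging aid; real installs usually run it as a launchd service, and flags vary by install):

	ls -l /var/run/socket_vmnet                                           # should exist and be a socket, not a plain file
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true  # probe: prints "Connection refused" while the daemon is down
	sudo /opt/socket_vmnet/bin/socket_vmnet /var/run/socket_vmnet         # hypothetical foreground start for debugging only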
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-646000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-646000: (2.107801917s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-646000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-646000 status --format={{.Host}}: exit status 7 (63.753ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.215015167s)

-- stdout --
	* [kubernetes-upgrade-646000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-646000" primary control-plane node in "kubernetes-upgrade-646000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 13:51:40.033454    4099 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:51:40.033591    4099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:51:40.033597    4099 out.go:304] Setting ErrFile to fd 2...
	I0311 13:51:40.033599    4099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:51:40.033734    4099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:51:40.034757    4099 out.go:298] Setting JSON to false
	I0311 13:51:40.050819    4099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3071,"bootTime":1710187229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:51:40.050889    4099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:51:40.055989    4099 out.go:177] * [kubernetes-upgrade-646000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:51:40.068955    4099 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:51:40.064076    4099 notify.go:220] Checking for updates...
	I0311 13:51:40.075012    4099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:51:40.083009    4099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:51:40.090987    4099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:51:40.097986    4099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:51:40.105980    4099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:51:40.110240    4099 config.go:182] Loaded profile config "kubernetes-upgrade-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0311 13:51:40.110489    4099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:51:40.113974    4099 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:51:40.120925    4099 start.go:297] selected driver: qemu2
	I0311 13:51:40.120930    4099 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:51:40.120980    4099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:51:40.123372    4099 cni.go:84] Creating CNI manager for ""
	I0311 13:51:40.123397    4099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:51:40.123420    4099 start.go:340] cluster config:
	{Name:kubernetes-upgrade-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:51:40.127904    4099 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:51:40.135989    4099 out.go:177] * Starting "kubernetes-upgrade-646000" primary control-plane node in "kubernetes-upgrade-646000" cluster
	I0311 13:51:40.139049    4099 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 13:51:40.139066    4099 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 13:51:40.139084    4099 cache.go:56] Caching tarball of preloaded images
	I0311 13:51:40.139141    4099 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:51:40.139147    4099 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0311 13:51:40.139194    4099 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kubernetes-upgrade-646000/config.json ...
	I0311 13:51:40.139596    4099 start.go:360] acquireMachinesLock for kubernetes-upgrade-646000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:51:40.139635    4099 start.go:364] duration metric: took 32.541µs to acquireMachinesLock for "kubernetes-upgrade-646000"
	I0311 13:51:40.139645    4099 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:51:40.139651    4099 fix.go:54] fixHost starting: 
	I0311 13:51:40.139757    4099 fix.go:112] recreateIfNeeded on kubernetes-upgrade-646000: state=Stopped err=<nil>
	W0311 13:51:40.139766    4099 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:51:40.144022    4099 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-646000" ...
	I0311 13:51:40.152019    4099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:2a:a7:a7:61:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:40.154031    4099 main.go:141] libmachine: STDOUT: 
	I0311 13:51:40.154047    4099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:51:40.154077    4099 fix.go:56] duration metric: took 14.426042ms for fixHost
	I0311 13:51:40.154090    4099 start.go:83] releasing machines lock for "kubernetes-upgrade-646000", held for 14.451125ms
	W0311 13:51:40.154097    4099 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:51:40.154135    4099 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:40.154140    4099 start.go:728] Will try again in 5 seconds ...
	I0311 13:51:45.154854    4099 start.go:360] acquireMachinesLock for kubernetes-upgrade-646000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:51:45.155228    4099 start.go:364] duration metric: took 288.292µs to acquireMachinesLock for "kubernetes-upgrade-646000"
	I0311 13:51:45.155399    4099 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:51:45.155422    4099 fix.go:54] fixHost starting: 
	I0311 13:51:45.156073    4099 fix.go:112] recreateIfNeeded on kubernetes-upgrade-646000: state=Stopped err=<nil>
	W0311 13:51:45.156100    4099 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:51:45.165573    4099 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-646000" ...
	I0311 13:51:45.169881    4099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:2a:a7:a7:61:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubernetes-upgrade-646000/disk.qcow2
	I0311 13:51:45.179873    4099 main.go:141] libmachine: STDOUT: 
	I0311 13:51:45.179946    4099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 13:51:45.180023    4099 fix.go:56] duration metric: took 24.6045ms for fixHost
	I0311 13:51:45.180045    4099 start.go:83] releasing machines lock for "kubernetes-upgrade-646000", held for 24.784792ms
	W0311 13:51:45.180285    4099 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 13:51:45.186608    4099 out.go:177] 
	W0311 13:51:45.190678    4099 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 13:51:45.190701    4099 out.go:239] * 
	* 
	W0311 13:51:45.192644    4099 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:51:45.202624    4099 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
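
The second start never reaches Kubernetes either: fixHost retries the VM restart once after 5 seconds, both attempts get the same refused connect within tens of milliseconds, and minikube exits 80 (GUEST_PROVISION). The remediation the tool prints can be run verbatim, though it cannot succeed until the socket is serviced:

	out/minikube-darwin-arm64 delete -p kubernetes-upgrade-646000
	out/minikube-darwin-arm64 start -p kubernetes-upgrade-646000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=qemu2
	# the delete-and-recreate the log suggests; it still depends on /var/run/socket_vmnet being reachable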
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-646000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-646000 version --output=json: exit status 1 (60.121167ms)

** stderr ** 
	error: context "kubernetes-upgrade-646000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
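
The kubectl failure is a consequence, not a separate bug: no cluster was provisioned, so minikube never wrote a kubernetes-upgrade-646000 context into the kubeconfig. This is verifiable with standard kubectl calls against the same KUBECONFIG:

	kubectl config get-contexts           # the kubernetes-upgrade-646000 entry is absent
	kubectl config get-contexts -o name   # names only, convenient when scripting the check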
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-11 13:51:45.278627 -0700 PDT m=+2528.601528209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-646000 -n kubernetes-upgrade-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-646000 -n kubernetes-upgrade-646000: exit status 7 (35.597542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-646000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-646000
--- FAIL: TestKubernetesUpgrade (17.36s)

TestStoppedBinaryUpgrade/Upgrade (617.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3141852415 start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3141852415 start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2 : (1m22.639622209s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3141852415 -p stopped-upgrade-517000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3141852415 -p stopped-upgrade-517000 stop: (12.095072041s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-517000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-517000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.62133075s)
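
The scenario is three steps: a released v1.26.0 binary starts the profile (1m22s), the same binary stops it (12s), then the HEAD binary restarts it and runs 8m42s before exiting 80. The same sequence by hand, with the old-binary path as a placeholder for the test's temporary download:

	/path/to/minikube-v1.26.0 start -p stopped-upgrade-517000 --memory=2200 --vm-driver=qemu2
	/path/to/minikube-v1.26.0 stop -p stopped-upgrade-517000
	out/minikube-darwin-arm64 start -p stopped-upgrade-517000 --memory=2200 --driver=qemu2
	# note the flag rename: the v1.26.0 binary takes --vm-driver, HEAD takes --driver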

-- stdout --
	* [stopped-upgrade-517000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-517000" primary control-plane node in "stopped-upgrade-517000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-517000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
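
Unlike the kubernetes-upgrade profile, this VM boots: the profile was created without socket_vmnet (SocketVMnetPath is empty in the config dumps below), so QEMU runs with user-mode networking and host port forwards, and provisioning proceeds over forwarded SSH. The relevant flags, trimmed from the full command in the stderr below:

	qemu-system-aarch64 ... -nic user,model=virtio,hostfwd=tcp::50269-:22,hostfwd=tcp::50270-:2376,hostname=stopped-upgrade-517000 ...
	# guest ports 22 and 2376 appear on the host as 127.0.0.1:50269 and 127.0.0.1:50270; no /var/run/socket_vmnet dependency
	ssh -p 50269 docker@127.0.0.1          # the provisioner's own wait step, runnable by hand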
** stderr ** 
	I0311 13:53:15.292503    4147 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:53:15.292649    4147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:53:15.292652    4147 out.go:304] Setting ErrFile to fd 2...
	I0311 13:53:15.292654    4147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:53:15.292785    4147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:53:15.293865    4147 out.go:298] Setting JSON to false
	I0311 13:53:15.312752    4147 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3166,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:53:15.312821    4147 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:53:15.317058    4147 out.go:177] * [stopped-upgrade-517000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:53:15.325094    4147 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:53:15.325166    4147 notify.go:220] Checking for updates...
	I0311 13:53:15.331940    4147 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:53:15.341926    4147 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:53:15.354052    4147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:53:15.361044    4147 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:53:15.371114    4147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:53:15.375430    4147 config.go:182] Loaded profile config "stopped-upgrade-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 13:53:15.380099    4147 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 13:53:15.382971    4147 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:53:15.389069    4147 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:53:15.396047    4147 start.go:297] selected driver: qemu2
	I0311 13:53:15.396053    4147 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 13:53:15.396112    4147 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:53:15.398833    4147 cni.go:84] Creating CNI manager for ""
	I0311 13:53:15.398852    4147 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:53:15.398889    4147 start.go:340] cluster config:
	{Name:stopped-upgrade-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 13:53:15.398952    4147 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:53:15.406078    4147 out.go:177] * Starting "stopped-upgrade-517000" primary control-plane node in "stopped-upgrade-517000" cluster
	I0311 13:53:15.410051    4147 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 13:53:15.410083    4147 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0311 13:53:15.410098    4147 cache.go:56] Caching tarball of preloaded images
	I0311 13:53:15.410186    4147 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 13:53:15.410194    4147 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0311 13:53:15.410240    4147 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/config.json ...
	I0311 13:53:15.410519    4147 start.go:360] acquireMachinesLock for stopped-upgrade-517000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 13:53:15.410556    4147 start.go:364] duration metric: took 28.459µs to acquireMachinesLock for "stopped-upgrade-517000"
	I0311 13:53:15.410566    4147 start.go:96] Skipping create...Using existing machine configuration
	I0311 13:53:15.410570    4147 fix.go:54] fixHost starting: 
	I0311 13:53:15.410682    4147 fix.go:112] recreateIfNeeded on stopped-upgrade-517000: state=Stopped err=<nil>
	W0311 13:53:15.410690    4147 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 13:53:15.417978    4147 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-517000" ...
	I0311 13:53:15.421245    4147 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50269-:22,hostfwd=tcp::50270-:2376,hostname=stopped-upgrade-517000 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/disk.qcow2
	I0311 13:53:15.475759    4147 main.go:141] libmachine: STDOUT: 
	I0311 13:53:15.475819    4147 main.go:141] libmachine: STDERR: 
	I0311 13:53:15.475832    4147 main.go:141] libmachine: Waiting for VM to start (ssh -p 50269 docker@127.0.0.1)...
	I0311 13:53:35.151994    4147 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/config.json ...
	I0311 13:53:35.152287    4147 machine.go:94] provisionDockerMachine start ...
	I0311 13:53:35.152341    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:35.152486    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:35.152493    4147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 13:53:35.220147    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0311 13:53:35.220163    4147 buildroot.go:166] provisioning hostname "stopped-upgrade-517000"
	I0311 13:53:35.220225    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:35.220339    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:35.220347    4147 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-517000 && echo "stopped-upgrade-517000" | sudo tee /etc/hostname
	I0311 13:53:35.288351    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-517000
	
	I0311 13:53:35.288412    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:35.288550    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:35.288558    4147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-517000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-517000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-517000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 13:53:35.353891    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 13:53:35.353905    4147 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18358-1220/.minikube CaCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18358-1220/.minikube}
	I0311 13:53:35.353916    4147 buildroot.go:174] setting up certificates
	I0311 13:53:35.353920    4147 provision.go:84] configureAuth start
	I0311 13:53:35.353925    4147 provision.go:143] copyHostCerts
	I0311 13:53:35.353988    4147 exec_runner.go:144] found /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem, removing ...
	I0311 13:53:35.353995    4147 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem
	I0311 13:53:35.354103    4147 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.pem (1082 bytes)
	I0311 13:53:35.354290    4147 exec_runner.go:144] found /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem, removing ...
	I0311 13:53:35.354295    4147 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem
	I0311 13:53:35.354333    4147 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/cert.pem (1123 bytes)
	I0311 13:53:35.354444    4147 exec_runner.go:144] found /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem, removing ...
	I0311 13:53:35.354447    4147 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem
	I0311 13:53:35.354602    4147 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18358-1220/.minikube/key.pem (1675 bytes)
	I0311 13:53:35.354700    4147 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-517000 san=[127.0.0.1 localhost minikube stopped-upgrade-517000]
	I0311 13:53:35.424009    4147 provision.go:177] copyRemoteCerts
	I0311 13:53:35.424044    4147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 13:53:35.424052    4147 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/id_rsa Username:docker}
	I0311 13:53:35.457264    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 13:53:35.463896    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0311 13:53:35.470618    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0311 13:53:35.477766    4147 provision.go:87] duration metric: took 123.842125ms to configureAuth
	I0311 13:53:35.477775    4147 buildroot.go:189] setting minikube options for container-runtime
	I0311 13:53:35.477870    4147 config.go:182] Loaded profile config "stopped-upgrade-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 13:53:35.477908    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:35.478001    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:35.478005    4147 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0311 13:53:35.535928    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0311 13:53:35.535937    4147 buildroot.go:70] root file system type: tmpfs
	I0311 13:53:35.535994    4147 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0311 13:53:35.536040    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:35.536142    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:35.536174    4147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0311 13:53:35.599181    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0311 13:53:35.599227    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:35.599327    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:35.599335    4147 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0311 13:53:35.983312    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0311 13:53:35.983326    4147 machine.go:97] duration metric: took 831.059166ms to provisionDockerMachine
	I0311 13:53:35.983332    4147 start.go:293] postStartSetup for "stopped-upgrade-517000" (driver="qemu2")
	I0311 13:53:35.983339    4147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 13:53:35.983407    4147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 13:53:35.983417    4147 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/id_rsa Username:docker}
	I0311 13:53:36.017227    4147 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 13:53:36.018424    4147 info.go:137] Remote host: Buildroot 2021.02.12
	I0311 13:53:36.018431    4147 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18358-1220/.minikube/addons for local assets ...
	I0311 13:53:36.018504    4147 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18358-1220/.minikube/files for local assets ...
	I0311 13:53:36.018584    4147 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem -> 16522.pem in /etc/ssl/certs
	I0311 13:53:36.018669    4147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0311 13:53:36.021349    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem --> /etc/ssl/certs/16522.pem (1708 bytes)
	I0311 13:53:36.028148    4147 start.go:296] duration metric: took 44.812084ms for postStartSetup
	I0311 13:53:36.028161    4147 fix.go:56] duration metric: took 20.618254375s for fixHost
	I0311 13:53:36.028193    4147 main.go:141] libmachine: Using SSH client type: native
	I0311 13:53:36.028294    4147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102aa5a90] 0x102aa82f0 <nil>  [] 0s} localhost 50269 <nil> <nil>}
	I0311 13:53:36.028299    4147 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0311 13:53:36.087977    4147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710190416.576050045
	
	I0311 13:53:36.087988    4147 fix.go:216] guest clock: 1710190416.576050045
	I0311 13:53:36.087993    4147 fix.go:229] Guest: 2024-03-11 13:53:36.576050045 -0700 PDT Remote: 2024-03-11 13:53:36.028163 -0700 PDT m=+20.761514293 (delta=547.887045ms)
	I0311 13:53:36.088036    4147 fix.go:200] guest clock delta is within tolerance: 547.887045ms
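The guest-clock check above runs `date +%s.%N` in the VM, parses the fractional epoch (1710190416.576050045) and compares it against the host clock; the resulting 547.887045ms skew is inside minikube's tolerance, so the guest clock is left alone. A small sketch of that comparison (the 2s threshold is an assumed placeholder, not necessarily the value minikube uses):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // guestSkew parses `date +%s.%N` output and returns the signed skew
    // relative to the supplied host clock reading.
    func guestSkew(dateOutput string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        host := time.Unix(1710190416, 28163000) // 13:53:36.028163 from the log
        skew, _ := guestSkew("1710190416.576050045\n", host)
        const tolerance = 2 * time.Second // assumed threshold for this sketch
        fmt.Printf("delta=%v within=%v\n", skew, math.Abs(skew.Seconds()) < tolerance.Seconds())
    }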
	I0311 13:53:36.088042    4147 start.go:83] releasing machines lock for "stopped-upgrade-517000", held for 20.678144792s
	I0311 13:53:36.088109    4147 ssh_runner.go:195] Run: cat /version.json
	I0311 13:53:36.088119    4147 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/id_rsa Username:docker}
	I0311 13:53:36.088172    4147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 13:53:36.088214    4147 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/id_rsa Username:docker}
	W0311 13:53:36.088834    4147 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50461->127.0.0.1:50269: write: broken pipe
	I0311 13:53:36.088852    4147 retry.go:31] will retry after 365.755394ms: ssh: handshake failed: write tcp 127.0.0.1:50461->127.0.0.1:50269: write: broken pipe
	W0311 13:53:36.487868    4147 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0311 13:53:36.487937    4147 ssh_runner.go:195] Run: systemctl --version
	I0311 13:53:36.489806    4147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0311 13:53:36.491410    4147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0311 13:53:36.491441    4147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0311 13:53:36.494548    4147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0311 13:53:36.499335    4147 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
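The two `find ... -exec sed` runs above rewrite any bridge/podman CNI configs so their IPv4 subnet becomes the pod CIDR 10.244.0.0/16 (the IPv6 entries are dropped); the log confirms 87-podman-bridge.conflist was patched. An approximate Go regexp equivalent of the subnet/gateway substitution, simplified to ignore the IPv6-deletion cases the sed expressions also handle:

    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        subnetRe  = regexp.MustCompile(`("subnet":\s*)"[^"]*"`)
        gatewayRe = regexp.MustCompile(`("gateway":\s*)"[^"]*"`)
    )

    // patchPodCIDR rewrites subnet/gateway values in a CNI conflist body.
    func patchPodCIDR(conf string) string {
        conf = subnetRe.ReplaceAllString(conf, `${1}"10.244.0.0/16"`)
        return gatewayRe.ReplaceAllString(conf, `${1}"10.244.0.1"`)
    }

    func main() {
        in := `{"ipam": {"ranges": [[{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]]}}`
        fmt.Println(patchPodCIDR(in))
    }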
	I0311 13:53:36.499351    4147 start.go:494] detecting cgroup driver to use...
	I0311 13:53:36.499433    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:53:36.506369    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0311 13:53:36.509942    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 13:53:36.513522    4147 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 13:53:36.513545    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 13:53:36.516953    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:53:36.519889    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 13:53:36.522942    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 13:53:36.526393    4147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 13:53:36.529960    4147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0311 13:53:36.533207    4147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 13:53:36.535818    4147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0311 13:53:36.539002    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:36.602459    4147 ssh_runner.go:195] Run: sudo systemctl restart containerd
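The sed runs between 13:53:36.50 and 13:53:36.53 edit /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.7, force SystemdCgroup = false (the "cgroupfs" driver chosen above), migrate runtimes to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before containerd is reloaded and restarted. A toy Go version of one of those expressions, the SystemdCgroup rewrite, applied to an in-memory TOML snippet:

    package main

    import (
        "fmt"
        "regexp"
    )

    // mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    var systemdCgroupRe = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    func main() {
        cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        fmt.Println(systemdCgroupRe.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
    }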
	I0311 13:53:36.609405    4147 start.go:494] detecting cgroup driver to use...
	I0311 13:53:36.609513    4147 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0311 13:53:36.615953    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:53:36.620610    4147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0311 13:53:36.627250    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0311 13:53:36.632103    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 13:53:36.636877    4147 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 13:53:36.674326    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 13:53:36.679729    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 13:53:36.685199    4147 ssh_runner.go:195] Run: which cri-dockerd
	I0311 13:53:36.686491    4147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0311 13:53:36.689568    4147 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0311 13:53:36.694862    4147 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0311 13:53:36.757782    4147 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0311 13:53:36.825645    4147 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0311 13:53:36.825707    4147 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0311 13:53:36.831142    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:36.913103    4147 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 13:53:38.034688    4147 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.121605833s)
	I0311 13:53:38.034763    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0311 13:53:38.043056    4147 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0311 13:53:38.049344    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 13:53:38.054463    4147 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0311 13:53:38.117286    4147 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0311 13:53:38.194297    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:38.255565    4147 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0311 13:53:38.261370    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0311 13:53:38.265559    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:38.329122    4147 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0311 13:53:38.368320    4147 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0311 13:53:38.368400    4147 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0311 13:53:38.370536    4147 start.go:562] Will wait 60s for crictl version
	I0311 13:53:38.370574    4147 ssh_runner.go:195] Run: which crictl
	I0311 13:53:38.371909    4147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 13:53:38.386134    4147 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0311 13:53:38.386208    4147 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 13:53:38.401799    4147 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0311 13:53:38.422653    4147 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0311 13:53:38.422718    4147 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0311 13:53:38.424161    4147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
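Both /etc/hosts entries in this log (host.minikube.internal -> 10.0.2.2 here, and control-plane.minikube.internal -> 10.0.2.15 later at 13:53:44) use the same shell recipe: filter out any existing line for the name, append the fresh mapping, write a temp file, and copy it over /etc/hosts so the update is re-runnable. Roughly, in Go (a sketch with error handling trimmed to essentials):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so it contains exactly one
    // "<ip>\t<name>" line, mirroring the grep -v / echo / cp pipeline.
    func ensureHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, fmt.Sprintf("%s\t%s", ip, name))
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }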
	I0311 13:53:38.427590    4147 kubeadm.go:877] updating cluster {Name:stopped-upgrade-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0311 13:53:38.427633    4147 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0311 13:53:38.427674    4147 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 13:53:38.442179    4147 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 13:53:38.442187    4147 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 13:53:38.442229    4147 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 13:53:38.446053    4147 ssh_runner.go:195] Run: which lz4
	I0311 13:53:38.447402    4147 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0311 13:53:38.448714    4147 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0311 13:53:38.448728    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0311 13:53:39.130055    4147 docker.go:649] duration metric: took 682.703667ms to copy over tarball
	I0311 13:53:39.130118    4147 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0311 13:53:40.295337    4147 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.16523975s)
	I0311 13:53:40.297498    4147 ssh_runner.go:146] rm: /preloaded.tar.lz4
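Because the on-disk image tags are k8s.gcr.io/* while v1.24.1 expects registry.k8s.io/*, the preload was considered absent: the runner stats /preloaded.tar.lz4, gets exit status 1, scp's the 359,514,331-byte tarball from the host cache, unpacks it with lz4-compressed tar into /var, and deletes it. A Go sketch of the unpack-then-clean-up step, mirroring the tar invocation in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks an lz4-compressed tarball (preserving
    // security.capability xattrs) under dst, then removes the tarball.
    func extractPreload(tarball, dst string) error {
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dst, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("untar %s: %w", tarball, err)
        }
        return os.Remove(tarball)
    }

    func main() {
        if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
            fmt.Println("preload tarball not present; would need to copy it first")
            return
        }
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }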
	I0311 13:53:40.313935    4147 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0311 13:53:40.317400    4147 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0311 13:53:40.322630    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:40.400571    4147 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0311 13:53:41.892812    4147 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.492271s)
	I0311 13:53:41.892894    4147 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0311 13:53:41.910327    4147 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0311 13:53:41.910339    4147 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0311 13:53:41.910345    4147 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0311 13:53:41.917012    4147 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:41.917019    4147 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:41.917063    4147 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:41.917169    4147 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:41.917197    4147 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:41.917230    4147 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:41.917274    4147 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0311 13:53:41.917533    4147 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:41.924879    4147 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:41.924967    4147 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:41.925014    4147 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:41.925710    4147 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:41.925767    4147 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:41.925775    4147 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:41.925808    4147 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0311 13:53:41.925851    4147 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:43.852542    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:43.889421    4147 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0311 13:53:43.889469    4147 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:43.889559    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0311 13:53:43.909820    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0311 13:53:43.919989    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:43.939623    4147 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0311 13:53:43.939646    4147 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:43.939708    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0311 13:53:43.952427    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0311 13:53:43.953548    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:43.964262    4147 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0311 13:53:43.964283    4147 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:43.964337    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0311 13:53:43.974052    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0311 13:53:43.987067    4147 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0311 13:53:43.987177    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:43.987722    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:43.988683    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:43.997359    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0311 13:53:44.000444    4147 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0311 13:53:44.000464    4147 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:44.000476    4147 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0311 13:53:44.000488    4147 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:44.000515    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0311 13:53:44.000515    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0311 13:53:44.009678    4147 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0311 13:53:44.009701    4147 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:44.009757    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0311 13:53:44.016561    4147 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0311 13:53:44.016586    4147 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0311 13:53:44.016637    4147 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0311 13:53:44.025086    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0311 13:53:44.025087    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0311 13:53:44.025195    4147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0311 13:53:44.031627    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0311 13:53:44.033344    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0311 13:53:44.033354    4147 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0311 13:53:44.033368    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0311 13:53:44.033433    4147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0311 13:53:44.048438    4147 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0311 13:53:44.048463    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0311 13:53:44.066357    4147 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0311 13:53:44.066373    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0311 13:53:44.099881    4147 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0311 13:53:44.099902    4147 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0311 13:53:44.099908    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0311 13:53:44.137193    4147 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
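Each cached image that "needs transfer" is removed from the runtime, scp'd from the host cache into /var/lib/minikube/images, and streamed into the daemon with `cat <file> | docker load`. The pipe can be reproduced without the shell by wiring the file straight into docker's stdin; a sketch (the image path is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // dockerLoad streams an image tarball into the local daemon, the direct
    // equivalent of `sudo cat <path> | docker load`.
    func dockerLoad(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }

    func main() {
        if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }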
	W0311 13:53:44.489674    4147 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0311 13:53:44.490279    4147 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:44.529804    4147 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0311 13:53:44.529843    4147 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:44.529942    4147 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:53:44.556877    4147 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0311 13:53:44.557038    4147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0311 13:53:44.559007    4147 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0311 13:53:44.559025    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0311 13:53:44.589943    4147 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0311 13:53:44.589955    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0311 13:53:44.821276    4147 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0311 13:53:44.821324    4147 cache_images.go:92] duration metric: took 2.911064917s to LoadCachedImages
	W0311 13:53:44.821368    4147 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0311 13:53:44.821376    4147 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0311 13:53:44.821425    4147 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-517000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
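The kubelet drop-in above first clears ExecStart= and then sets the real command line: the empty assignment is the standard systemd idiom for replacing, rather than appending to, ExecStart in an override. minikube renders the unit from a template; a minimal text/template sketch of that kind of rendering (field names are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const unitTmpl = `[Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.24.1/kubelet",
            "CRISocket":   "unix:///var/run/cri-dockerd.sock",
            "NodeName":    "stopped-upgrade-517000",
            "NodeIP":      "10.0.2.15",
        })
    }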
	I0311 13:53:44.821493    4147 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0311 13:53:44.835372    4147 cni.go:84] Creating CNI manager for ""
	I0311 13:53:44.835384    4147 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:53:44.835389    4147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 13:53:44.835398    4147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-517000 NodeName:stopped-upgrade-517000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 13:53:44.835464    4147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-517000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0311 13:53:44.835527    4147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0311 13:53:44.838658    4147 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 13:53:44.838687    4147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 13:53:44.841718    4147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0311 13:53:44.846761    4147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 13:53:44.851923    4147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
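The 2096-byte kubeadm.yaml just staged is the four-document stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a stream is to decode it document by document; a sketch using gopkg.in/yaml.v3 (an assumed third-party dependency, not something this test itself uses):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // the file rendered above
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
        }
    }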
	I0311 13:53:44.857177    4147 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0311 13:53:44.858344    4147 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 13:53:44.862014    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:53:44.923647    4147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:53:44.935896    4147 certs.go:68] Setting up /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000 for IP: 10.0.2.15
	I0311 13:53:44.935906    4147 certs.go:194] generating shared ca certs ...
	I0311 13:53:44.935915    4147 certs.go:226] acquiring lock for ca certs: {Name:mkd7f96dc3b50acb1e4b9ffed31996dfe6eec0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:53:44.936073    4147 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key
	I0311 13:53:44.936108    4147 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key
	I0311 13:53:44.936112    4147 certs.go:256] generating profile certs ...
	I0311 13:53:44.936172    4147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/client.key
	I0311 13:53:44.936189    4147 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.key.3ccef596
	I0311 13:53:44.936196    4147 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.crt.3ccef596 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0311 13:53:45.123363    4147 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.crt.3ccef596 ...
	I0311 13:53:45.123378    4147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.crt.3ccef596: {Name:mkdfbd5dad05bb1c61d00c4bc6540db8bf87e4f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:53:45.123662    4147 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.key.3ccef596 ...
	I0311 13:53:45.123669    4147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.key.3ccef596: {Name:mk4fd2fa757577e50443a53299e596e28a85f71b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:53:45.123807    4147 certs.go:381] copying /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.crt.3ccef596 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.crt
	I0311 13:53:45.123931    4147 certs.go:385] copying /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.key.3ccef596 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.key
	I0311 13:53:45.124077    4147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/proxy-client.key
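The apiserver profile cert generated at 13:53:45 carries IP SANs for the in-cluster service VIP (10.96.0.1), loopback, and the guest address (10.0.2.15), so clients can validate the server on any of those addresses. Generating such a cert with Go's standard library looks roughly like this (self-signed for brevity; the real cert is signed by the shared minikubeCA):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // the SANs listed in the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }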
	I0311 13:53:45.124227    4147 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/1652.pem (1338 bytes)
	W0311 13:53:45.124256    4147 certs.go:480] ignoring /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/1652_empty.pem, impossibly tiny 0 bytes
	I0311 13:53:45.124263    4147 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca-key.pem (1679 bytes)
	I0311 13:53:45.124288    4147 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem (1082 bytes)
	I0311 13:53:45.124315    4147 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem (1123 bytes)
	I0311 13:53:45.124355    4147 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/key.pem (1675 bytes)
	I0311 13:53:45.124405    4147 certs.go:484] found cert: /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem (1708 bytes)
	I0311 13:53:45.124747    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 13:53:45.135942    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 13:53:45.145262    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 13:53:45.154584    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0311 13:53:45.164418    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0311 13:53:45.176788    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0311 13:53:45.188481    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 13:53:45.198444    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0311 13:53:45.205719    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 13:53:45.213201    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/1652.pem --> /usr/share/ca-certificates/1652.pem (1338 bytes)
	I0311 13:53:45.220828    4147 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/ssl/certs/16522.pem --> /usr/share/ca-certificates/16522.pem (1708 bytes)
	I0311 13:53:45.227950    4147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 13:53:45.233576    4147 ssh_runner.go:195] Run: openssl version
	I0311 13:53:45.235897    4147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1652.pem && ln -fs /usr/share/ca-certificates/1652.pem /etc/ssl/certs/1652.pem"
	I0311 13:53:45.239077    4147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1652.pem
	I0311 13:53:45.240824    4147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 20:18 /usr/share/ca-certificates/1652.pem
	I0311 13:53:45.240864    4147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1652.pem
	I0311 13:53:45.242893    4147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1652.pem /etc/ssl/certs/51391683.0"
	I0311 13:53:45.246637    4147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16522.pem && ln -fs /usr/share/ca-certificates/16522.pem /etc/ssl/certs/16522.pem"
	I0311 13:53:45.250382    4147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16522.pem
	I0311 13:53:45.252227    4147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 20:18 /usr/share/ca-certificates/16522.pem
	I0311 13:53:45.252266    4147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16522.pem
	I0311 13:53:45.254305    4147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16522.pem /etc/ssl/certs/3ec20f2e.0"
	I0311 13:53:45.258458    4147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 13:53:45.263255    4147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:53:45.265173    4147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 20:11 /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:53:45.265205    4147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 13:53:45.267422    4147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
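The ls/openssl/ln sequence for each PEM builds an OpenSSL-style trust store: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and the cert is linked as /etc/ssl/certs/<hash>.0 (hence 51391683.0, 3ec20f2e.0 and b5213941.0 above), which is how TLS clients locate CA certs by hash lookup. A sketch of the hash-and-link step:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mimics:
    //   ln -fs <cert> /etc/ssl/certs/$(openssl x509 -hash -noout -in <cert>).0
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // -f semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }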
	I0311 13:53:45.271161    4147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 13:53:45.273407    4147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0311 13:53:45.275972    4147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0311 13:53:45.278086    4147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0311 13:53:45.280547    4147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0311 13:53:45.282667    4147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0311 13:53:45.284756    4147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0311 13:53:45.286936    4147 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50301 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0311 13:53:45.287026    4147 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 13:53:45.302004    4147 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0311 13:53:45.305745    4147 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0311 13:53:45.305755    4147 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0311 13:53:45.305757    4147 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0311 13:53:45.305804    4147 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0311 13:53:45.308816    4147 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0311 13:53:45.309070    4147 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-517000" does not appear in /Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:53:45.309117    4147 kubeconfig.go:62] /Users/jenkins/minikube-integration/18358-1220/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-517000" cluster setting kubeconfig missing "stopped-upgrade-517000" context setting]
	I0311 13:53:45.309266    4147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/kubeconfig: {Name:mkd61d3fa94ba0392c00bb2cce43bcec89e45a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:53:45.309844    4147 kapi.go:59] client config for stopped-upgrade-517000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/client.key", CAFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d93fd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 13:53:45.310181    4147 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0311 13:53:45.313386    4147 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-517000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0311 13:53:45.313396    4147 kubeadm.go:1153] stopping kube-system containers ...
	I0311 13:53:45.313457    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0311 13:53:45.327100    4147 docker.go:483] Stopping containers: [77caab578172 d2bff02933cb d64c5c8a52ca 79e4869035fa 933169a20d31 9334a90391c2 b77168ce6e68 6f1c24dc1388]
	I0311 13:53:45.327178    4147 ssh_runner.go:195] Run: docker stop 77caab578172 d2bff02933cb d64c5c8a52ca 79e4869035fa 933169a20d31 9334a90391c2 b77168ce6e68 6f1c24dc1388
	I0311 13:53:45.338190    4147 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0311 13:53:45.344212    4147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 13:53:45.347000    4147 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 13:53:45.347007    4147 kubeadm.go:156] found existing configuration files:
	
	I0311 13:53:45.347031    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf
	I0311 13:53:45.350122    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 13:53:45.350154    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 13:53:45.353080    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf
	I0311 13:53:45.355509    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 13:53:45.355580    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 13:53:45.358565    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf
	I0311 13:53:45.361614    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 13:53:45.361640    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 13:53:45.364501    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf
	I0311 13:53:45.367004    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 13:53:45.367024    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
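For each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the runner greps for the expected endpoint https://control-plane.minikube.internal:50301 and deletes the file when the grep fails (here the files do not exist yet, so the rm's are no-ops). The check reduces to "remove the kubeconfig unless it already mentions the endpoint"; sketched:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeUnlessCurrent deletes path unless it already references endpoint,
    // mirroring: grep <endpoint> <path> || rm -f <path>.
    func removeUnlessCurrent(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err == nil && strings.Contains(string(data), endpoint) {
            return nil // config already points at the right endpoint
        }
        if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
            return err
        }
        return nil
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := removeUnlessCurrent("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:50301"); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }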
	I0311 13:53:45.369960    4147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 13:53:45.372984    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:53:45.396253    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:53:45.790995    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:53:45.904526    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:53:45.931879    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0311 13:53:45.955081    4147 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:53:45.955654    4147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:53:46.455454    4147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:53:46.957300    4147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:53:46.962082    4147 api_server.go:72] duration metric: took 1.007033875s to wait for apiserver process to appear ...
	I0311 13:53:46.962094    4147 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:53:46.962103    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:53:51.964187    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:53:51.964285    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:53:56.965220    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:53:56.965241    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:01.965922    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:01.965946    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:06.966642    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:06.966721    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:11.968002    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:11.968045    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:16.968725    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:16.968751    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:21.970497    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:21.970533    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:26.972599    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:26.972638    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:31.974697    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:31.974735    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:36.976800    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:36.976841    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:41.979044    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:41.979107    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:46.981298    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:46.981400    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:54:46.992558    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:54:46.992638    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:54:47.003484    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:54:47.003562    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:54:47.017820    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:54:47.017884    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:54:47.028628    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:54:47.028700    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:54:47.038573    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:54:47.038649    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:54:47.049053    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:54:47.049127    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:54:47.059118    4147 logs.go:276] 0 containers: []
	W0311 13:54:47.059130    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:54:47.059189    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:54:47.069350    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:54:47.069369    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:54:47.069375    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:54:47.110059    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:54:47.110070    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:54:47.123823    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:54:47.123833    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:54:47.135946    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:54:47.135961    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:54:47.147426    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:54:47.147443    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:54:47.151666    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:54:47.151675    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:54:47.168531    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:54:47.168542    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:54:47.185161    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:54:47.185176    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:54:47.197661    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:54:47.197677    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:54:47.209311    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:54:47.209322    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:54:47.231351    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:54:47.231368    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:54:47.246732    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:54:47.246742    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:54:47.283452    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:54:47.283463    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:54:47.390120    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:54:47.390134    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:54:47.408547    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:54:47.408560    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:54:47.423959    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:54:47.423971    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
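Every healthz probe above fails after exactly five seconds with "Client.Timeout exceeded while awaiting headers": the retry loop gives each individual GET its own client timeout rather than sharing one deadline, and only falls into log gathering once the overall wait is exhausted. A minimal Go sketch of that pattern, assuming a hypothetical waitForHealthz helper (an illustration, not minikube's actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz probes url until it answers 200 OK or the overall
// deadline expires. Each probe gets its own 5 s client timeout, which
// is what surfaces as "Client.Timeout exceeded while awaiting headers"
// in the log whenever the apiserver never responds. Hypothetical
// sketch, not the real minikube code.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: accept the apiserver's
			// self-signed certificate instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Brief pause so fast failures (e.g. connection refused)
		// don't turn the loop into a busy-wait.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never reported healthy within %s", url, deadline)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}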
	I0311 13:54:49.949601    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:54:54.950748    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:54:54.951031    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:54:54.974563    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:54:54.974666    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:54:54.989423    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:54:54.989491    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:54:55.002098    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:54:55.002177    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:54:55.012623    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:54:55.012688    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:54:55.022933    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:54:55.023006    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:54:55.033274    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:54:55.033340    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:54:55.043029    4147 logs.go:276] 0 containers: []
	W0311 13:54:55.043041    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:54:55.043105    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:54:55.053158    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:54:55.053172    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:54:55.053178    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:54:55.069917    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:54:55.069927    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:54:55.085492    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:54:55.085503    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:54:55.097920    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:54:55.097934    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:54:55.113505    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:54:55.113514    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:54:55.127555    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:54:55.127568    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:54:55.143233    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:54:55.143245    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:54:55.155095    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:54:55.155106    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:54:55.166389    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:54:55.166398    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:54:55.184279    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:54:55.184291    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:54:55.198738    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:54:55.198750    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:54:55.214258    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:54:55.214269    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:54:55.238955    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:54:55.238963    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:54:55.243565    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:54:55.243573    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:54:55.279523    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:54:55.279537    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:54:55.317086    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:54:55.317097    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:54:57.857338    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:02.858673    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:02.858969    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:02.886219    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:02.886322    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:02.901716    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:02.901792    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:02.914718    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:02.914786    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:02.929784    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:02.929860    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:02.940227    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:02.940291    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:02.951745    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:02.951815    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:02.961597    4147 logs.go:276] 0 containers: []
	W0311 13:55:02.961611    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:02.961679    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:02.974645    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:02.974663    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:02.974669    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:02.986621    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:02.986634    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:03.002601    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:03.002612    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:03.017754    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:03.017766    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:03.042608    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:03.042617    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:03.078872    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:03.078883    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:03.096541    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:03.096553    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:03.108040    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:03.108051    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:03.150191    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:03.150205    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:03.167173    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:03.167184    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:03.183984    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:03.183997    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:03.196292    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:03.196303    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:03.233000    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:03.233012    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:03.237041    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:03.237048    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:03.250599    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:03.250612    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:03.264766    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:03.264777    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
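Each diagnostic pass discovers container IDs one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails every match with docker logs --tail 400. A rough Go approximation, run locally with os/exec instead of minikube's ssh_runner (listComponent is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listComponent returns the IDs of every container, running or exited,
// whose name matches the kubeadm convention k8s_<component>. This is a
// hypothetical local stand-in: minikube runs the identical command
// inside the VM over SSH (ssh_runner.go).
func listComponent(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := listComponent(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
		for _, id := range ids {
			// Tail the last 400 lines, as in the log's
			// `docker logs --tail 400 <id>` invocations.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = out // a real collector would aggregate this per component
		}
	}
}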
	I0311 13:55:05.778521    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:10.781155    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:10.781649    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:10.819668    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:10.819809    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:10.840099    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:10.840212    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:10.855468    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:10.855545    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:10.867563    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:10.867638    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:10.877923    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:10.877989    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:10.889056    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:10.889119    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:10.904366    4147 logs.go:276] 0 containers: []
	W0311 13:55:10.904377    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:10.904438    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:10.914957    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:10.914977    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:10.914982    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:10.926028    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:10.926039    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:10.938219    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:10.938229    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:10.953643    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:10.953653    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:10.992995    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:10.993009    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:11.007225    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:11.007235    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:11.019359    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:11.019375    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:11.047885    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:11.047897    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:11.063309    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:11.063319    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:11.075140    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:11.075150    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:11.099366    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:11.099372    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:11.113720    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:11.113731    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:11.125361    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:11.125373    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:11.161993    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:11.162002    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:11.166387    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:11.166396    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:11.207213    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:11.207226    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:13.723408    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:18.725720    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:18.726113    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:18.760333    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:18.760456    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:18.781297    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:18.781393    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:18.797294    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:18.797366    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:18.811819    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:18.811898    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:18.821986    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:18.822052    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:18.832311    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:18.832386    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:18.842849    4147 logs.go:276] 0 containers: []
	W0311 13:55:18.842864    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:18.842923    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:18.855353    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:18.855372    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:18.855378    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:18.866255    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:18.866265    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:18.878247    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:18.878261    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:18.891945    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:18.891957    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:18.911343    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:18.911353    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:18.926715    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:18.926726    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:18.931050    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:18.931055    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:18.945241    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:18.945251    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:18.960132    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:18.960143    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:18.999531    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:18.999543    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:19.011371    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:19.011381    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:19.026464    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:19.026477    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:19.044028    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:19.044038    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:19.067188    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:19.067197    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:19.103612    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:19.103623    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:19.139329    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:19.139340    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:21.655750    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:26.658030    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:26.658294    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:26.685951    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:26.686080    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:26.701210    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:26.701289    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:26.713950    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:26.714022    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:26.724633    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:26.724702    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:26.734967    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:26.735031    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:26.745851    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:26.745921    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:26.760012    4147 logs.go:276] 0 containers: []
	W0311 13:55:26.760024    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:26.760079    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:26.770492    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:26.770509    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:26.770514    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:26.784389    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:26.784402    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:26.795241    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:26.795253    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:26.814623    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:26.814631    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:26.837639    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:26.837646    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:26.874725    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:26.874737    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:26.878909    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:26.878915    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:26.891416    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:26.891429    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:26.903330    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:26.903342    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:26.918433    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:26.918443    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:26.957732    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:26.957742    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:26.974353    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:26.974364    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:26.986034    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:26.986046    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:27.001704    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:27.001714    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:27.019370    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:27.019383    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:27.053471    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:27.053485    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:29.570650    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:34.572775    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:34.572954    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:34.591335    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:34.591433    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:34.605426    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:34.605499    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:34.617674    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:34.617738    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:34.628388    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:34.628460    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:34.642558    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:34.642626    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:34.653517    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:34.653589    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:34.664372    4147 logs.go:276] 0 containers: []
	W0311 13:55:34.664385    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:34.664446    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:34.679207    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:34.679223    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:34.679228    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:34.716892    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:34.716901    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:34.721271    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:34.721279    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:34.735000    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:34.735012    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:34.749783    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:34.749794    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:34.764395    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:34.764404    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:34.778875    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:34.778885    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:34.801578    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:34.801584    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:34.838270    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:34.838283    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:34.849294    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:34.849306    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:34.872386    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:34.872399    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:34.884114    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:34.884127    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:34.921702    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:34.921715    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:34.936024    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:34.936036    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:34.948358    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:34.948372    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:34.963860    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:34.963869    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:37.477405    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:42.479831    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:42.480080    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:42.505802    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:42.505938    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:42.522282    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:42.522373    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:42.535613    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:42.535691    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:42.548115    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:42.548185    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:42.558570    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:42.558640    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:42.569003    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:42.569085    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:42.578996    4147 logs.go:276] 0 containers: []
	W0311 13:55:42.579007    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:42.579060    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:42.588979    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:42.588994    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:42.588999    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:42.593271    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:42.593281    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:42.610164    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:42.610173    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:42.634843    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:42.634855    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:42.679452    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:42.679466    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:42.691639    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:42.691655    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:42.703868    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:42.703884    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:42.743159    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:42.743187    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:42.754099    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:42.754110    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:42.767315    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:42.767325    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:42.791464    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:42.791476    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:42.806721    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:42.806733    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:42.821371    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:42.821383    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:42.835588    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:42.835602    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:42.878096    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:42.878116    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:42.894550    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:42.894563    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:45.411523    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:50.413838    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:50.414072    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:50.439276    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:50.439393    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:50.455281    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:50.455369    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:50.468091    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:50.468167    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:50.482999    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:50.483066    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:50.494021    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:50.494087    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:50.505163    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:50.505238    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:50.515478    4147 logs.go:276] 0 containers: []
	W0311 13:55:50.515493    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:50.515551    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:50.527267    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:50.527284    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:50.527291    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:55:50.570457    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:50.570470    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:50.621948    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:50.621958    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:50.635590    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:50.635600    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:50.647744    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:50.647755    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:50.668794    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:50.668804    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:50.683843    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:50.683853    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:50.708843    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:50.708854    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:50.747440    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:50.747449    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:50.758737    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:50.758746    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:50.772590    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:50.772599    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:50.783367    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:50.783381    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:50.795422    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:50.795434    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:50.799586    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:50.799593    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:50.813798    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:50.813812    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:50.829286    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:50.829296    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
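The "container status" step uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl when it is installed and fall back to plain docker ps -a otherwise or on failure. The same preference order expressed in Go, as a hypothetical sketch:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to
// plain docker ps -a otherwise or on failure, matching the shell
// one-liner in the log. Hypothetical sketch only.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}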
	I0311 13:55:53.342907    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:55:58.345151    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:55:58.345321    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:55:58.357421    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:55:58.357493    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:55:58.375939    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:55:58.376053    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:55:58.388683    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:55:58.388760    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:55:58.401480    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:55:58.401568    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:55:58.420094    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:55:58.420176    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:55:58.434913    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:55:58.434987    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:55:58.445163    4147 logs.go:276] 0 containers: []
	W0311 13:55:58.445186    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:55:58.445248    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:55:58.455486    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:55:58.455503    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:55:58.455509    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:55:58.472767    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:55:58.472778    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:55:58.477271    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:55:58.477278    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:55:58.492444    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:55:58.492455    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:55:58.509706    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:55:58.509716    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:55:58.521421    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:55:58.521437    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:55:58.533244    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:55:58.533254    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:55:58.548468    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:55:58.548477    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:55:58.560442    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:55:58.560453    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:55:58.599432    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:55:58.599446    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:55:58.613197    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:55:58.613209    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:55:58.650296    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:55:58.650308    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:55:58.669825    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:55:58.669835    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:55:58.681615    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:55:58.681627    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:55:58.706321    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:55:58.706331    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:55:58.717733    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:55:58.717744    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:01.253631    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:06.255016    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:06.255417    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:06.299836    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:06.299986    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:06.321276    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:06.321390    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:06.336296    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:06.336365    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:06.352043    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:06.352118    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:06.362404    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:06.362470    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:06.374257    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:06.374335    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:06.384760    4147 logs.go:276] 0 containers: []
	W0311 13:56:06.384773    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:06.384859    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:06.395675    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:06.395693    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:06.395700    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:06.409534    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:06.409547    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:06.423261    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:06.423271    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:06.460924    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:06.460933    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:06.465587    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:06.465597    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:06.507623    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:06.507636    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:06.526433    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:06.526445    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:06.541927    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:06.541941    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:06.552845    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:06.552860    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:06.569948    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:06.569958    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:06.581918    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:06.581929    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:06.594455    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:06.594467    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:06.609463    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:06.609472    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:06.633913    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:06.633923    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:06.645729    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:06.645739    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:06.683375    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:06.683388    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
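
The cycle above repeats for the rest of this trace: minikube polls the guest apiserver's /healthz endpoint, the GET times out after about 5 seconds ("Client.Timeout exceeded while awaiting headers" is Go's net/http client-timeout error), and each failure triggers a diagnostics pass over the control-plane containers. Below is a minimal Go sketch of the polling half, assuming the self-signed in-VM apiserver certificate and the 10.0.2.15:8443 endpoint seen in the log; it is an illustration of the pattern, not minikube's actual api_server.go.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			// the apiserver cert inside the VM is self-signed
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 3; i++ { // bounded here for demonstration
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// this is the error shape logged above
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```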
	I0311 13:56:09.199463    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:14.201953    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:14.202333    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:14.243165    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:14.243300    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:14.265437    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:14.265516    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:14.278668    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:14.278733    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:14.290170    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:14.290245    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:14.300947    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:14.301020    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:14.311345    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:14.311418    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:14.325549    4147 logs.go:276] 0 containers: []
	W0311 13:56:14.325561    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:14.325621    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:14.338232    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:14.338249    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:14.338255    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:14.376026    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:14.376039    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:14.390430    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:14.390444    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:14.401724    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:14.401736    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:14.413118    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:14.413130    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:14.426609    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:14.426619    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:14.440920    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:14.440930    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:14.452869    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:14.452879    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:14.470312    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:14.470322    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:14.513053    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:14.513073    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:56:14.530473    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:14.530484    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:14.555261    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:14.555272    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:14.566931    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:14.566943    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:14.570821    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:14.570826    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:14.606710    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:14.606721    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:14.619281    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:14.619291    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:17.134724    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:22.136946    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:22.137181    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:22.155847    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:22.155945    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:22.169279    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:22.169355    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:22.181021    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:22.181088    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:22.192539    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:22.192610    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:22.203352    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:22.203421    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:22.213565    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:22.213635    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:22.224283    4147 logs.go:276] 0 containers: []
	W0311 13:56:22.224295    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:22.224355    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:22.238721    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:22.238736    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:22.238742    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:22.252064    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:22.252075    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:22.265973    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:22.265982    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:22.278016    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:22.278027    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:22.300341    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:22.300347    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:22.337291    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:22.337300    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:22.354190    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:22.354201    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:22.365271    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:22.365280    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:22.378726    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:22.378738    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:56:22.393785    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:22.393794    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:22.405707    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:22.405720    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:22.409695    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:22.409701    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:22.448029    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:22.448040    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:22.486667    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:22.486679    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:22.500637    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:22.500647    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:22.519647    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:22.519664    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:25.041256    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:30.043559    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:30.043760    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:30.058493    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:30.058573    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:30.069904    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:30.069977    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:30.080876    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:30.080974    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:30.091778    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:30.091858    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:30.102253    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:30.102321    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:30.113126    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:30.113199    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:30.123227    4147 logs.go:276] 0 containers: []
	W0311 13:56:30.123239    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:30.123307    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:30.134488    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:30.134506    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:30.134513    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:30.172899    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:30.172908    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:30.208992    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:30.209004    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:30.223205    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:30.223216    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:30.242690    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:30.242702    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:30.246812    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:30.246818    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:30.260651    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:30.260664    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:30.295294    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:30.295305    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:30.309352    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:30.309364    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:30.329147    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:30.329158    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:30.354281    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:30.354288    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:30.365505    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:30.365515    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:30.377585    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:30.377598    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:56:30.392644    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:30.392655    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:30.404893    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:30.404903    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:30.419336    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:30.419347    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:32.932767    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:37.935239    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:37.935699    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:37.980871    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:37.981014    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:38.001875    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:38.001992    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:38.019726    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:38.019803    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:38.032056    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:38.032140    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:38.043324    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:38.043399    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:38.054404    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:38.054472    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:38.064590    4147 logs.go:276] 0 containers: []
	W0311 13:56:38.064601    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:38.064661    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:38.075500    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:38.075517    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:38.075523    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:38.093431    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:38.093445    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:38.107190    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:38.107199    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:38.123110    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:38.123120    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:38.134225    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:38.134237    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:38.146019    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:38.146030    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:38.157938    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:38.157948    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:38.193874    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:38.193885    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:38.205550    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:38.205562    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:38.243569    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:38.243578    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:38.281865    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:38.281874    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:38.305711    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:38.305720    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:38.317643    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:38.317656    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:38.321816    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:38.321823    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:38.336142    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:38.336154    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:56:38.352405    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:38.352420    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:40.870242    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:45.871734    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:45.871845    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:45.884195    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:45.884269    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:45.895153    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:45.895218    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:45.909634    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:45.909708    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:45.920503    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:45.920577    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:45.931662    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:45.931736    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:45.942862    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:45.942927    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:45.952976    4147 logs.go:276] 0 containers: []
	W0311 13:56:45.952987    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:45.953049    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:45.965220    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:45.965240    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:45.965247    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:45.980934    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:45.980948    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:46.018382    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:46.018395    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:56:46.034767    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:46.034782    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:46.046870    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:46.046884    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:46.063970    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:46.063981    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:46.088015    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:46.088022    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:46.102817    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:46.102829    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:46.113775    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:46.113787    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:46.128253    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:46.128263    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:46.146918    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:46.146928    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:46.151056    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:46.151065    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:46.189852    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:46.189863    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:46.227292    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:46.227301    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:46.241078    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:46.241089    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:46.255351    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:46.255362    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
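
Each diagnostics pass first enumerates containers per component with `docker ps -a --filter=name=k8s_<component>`; kubelet-managed Docker containers are named `k8s_<container>_<pod>_...`, so a count of 2 for a component usually means an exited earlier instance plus its restart. It then tails the last 400 lines of every match. A hedged sketch of that step, shelling out the same way the trace shows (not the real logs.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) for one
// kubelet-managed component, using the same filter as the trace.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		// same command the trace runs for every container
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Print(string(logs))
	}
}
```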
	I0311 13:56:48.768968    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:56:53.770669    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:56:53.770953    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:56:53.799903    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:56:53.800032    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:56:53.819435    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:56:53.819515    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:56:53.832423    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:56:53.832495    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:56:53.844314    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:56:53.844386    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:56:53.855231    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:56:53.855301    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:56:53.865660    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:56:53.865732    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:56:53.878625    4147 logs.go:276] 0 containers: []
	W0311 13:56:53.878637    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:56:53.878701    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:56:53.888649    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:56:53.888671    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:56:53.888677    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:56:53.900562    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:56:53.900575    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:56:53.936165    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:56:53.936177    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:56:53.948305    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:56:53.948314    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:56:53.971895    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:56:53.971908    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:56:53.989054    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:56:53.989065    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:56:54.003763    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:56:54.003777    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:56:54.017674    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:56:54.017685    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:56:54.029561    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:56:54.029573    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:56:54.050348    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:56:54.050362    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:56:54.062065    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:56:54.062084    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:56:54.099747    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:56:54.099757    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:56:54.137813    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:56:54.137824    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:56:54.149239    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:56:54.149249    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:56:54.153379    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:56:54.153386    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:56:54.166703    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:56:54.166715    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:56:56.682567    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:01.684826    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:01.685170    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:01.716687    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:57:01.716822    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:01.735717    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:57:01.735850    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:01.753908    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:57:01.753981    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:01.765788    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:57:01.765861    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:01.776343    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:57:01.776410    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:01.787122    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:57:01.787192    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:01.797685    4147 logs.go:276] 0 containers: []
	W0311 13:57:01.797696    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:01.797748    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:01.807560    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:57:01.807580    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:01.807585    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:01.831060    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:01.831074    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:01.835998    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:57:01.836007    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:57:01.848616    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:57:01.848627    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:57:01.862835    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:57:01.862845    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:57:01.903615    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:57:01.903629    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:57:01.922560    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:57:01.922574    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:57:01.933906    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:57:01.933918    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:57:01.951178    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:57:01.951187    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:57:01.962725    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:01.962736    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:02.001484    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:02.001495    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:02.036636    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:57:02.036647    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:02.050008    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:57:02.050018    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:57:02.065841    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:57:02.065852    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:57:02.080878    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:57:02.080890    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:57:02.092612    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:57:02.092624    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:57:04.610103    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:09.610857    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:09.611174    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:09.641553    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:57:09.641687    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:09.660967    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:57:09.661089    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:09.675728    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:57:09.675801    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:09.688092    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:57:09.688166    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:09.698774    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:57:09.698844    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:09.709497    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:57:09.709562    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:09.728443    4147 logs.go:276] 0 containers: []
	W0311 13:57:09.728454    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:09.728515    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:09.744079    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:57:09.744095    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:57:09.744101    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:57:09.758629    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:57:09.758642    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:57:09.770367    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:57:09.770377    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:57:09.785355    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:57:09.785365    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:57:09.798557    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:57:09.798572    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:57:09.815730    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:57:09.815740    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:09.827704    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:57:09.827718    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:57:09.865720    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:57:09.865732    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:57:09.886562    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:57:09.886575    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:57:09.897946    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:57:09.897955    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:57:09.911350    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:57:09.911360    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:57:09.925164    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:57:09.925174    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:57:09.949505    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:09.949516    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:09.973895    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:09.973913    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:10.013851    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:10.013862    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:10.018108    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:10.018117    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:12.557592    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:17.559634    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:17.559837    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:17.576546    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:57:17.576635    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:17.589637    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:57:17.589712    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:17.600607    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:57:17.600683    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:17.611010    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:57:17.611084    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:17.621771    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:57:17.621841    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:17.632129    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:57:17.632201    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:17.644359    4147 logs.go:276] 0 containers: []
	W0311 13:57:17.644370    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:17.644427    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:17.656685    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:57:17.656704    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:57:17.656709    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:57:17.674157    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:57:17.674167    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:57:17.685943    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:57:17.685956    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:17.698216    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:17.698230    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:17.733854    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:57:17.733870    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:57:17.745754    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:57:17.745767    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:57:17.758864    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:57:17.758879    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:57:17.774651    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:57:17.774662    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:57:17.786687    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:57:17.786697    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:57:17.801438    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:17.801446    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:17.824557    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:17.824564    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:17.863034    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:57:17.863049    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:57:17.877404    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:57:17.877414    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:57:17.915882    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:57:17.915895    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:57:17.930302    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:17.930316    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:17.934851    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:57:17.934860    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
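
The poll-and-gather cycles recur roughly every 8 seconds and, in a wait loop like this, are normally bounded by an overall deadline after which the start attempt is abandoned. A generic sketch of such a retry budget follows; the interval is taken from the cycle spacing above, while the deadline handling is an assumption for illustration rather than minikube's actual logic.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitHealthy retries check on a fixed interval until it succeeds or
// the context's deadline expires.
func waitHealthy(ctx context.Context, interval time.Duration, check func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("apiserver never became healthy: " + ctx.Err().Error())
		case <-ticker.C:
		}
	}
}

func main() {
	// short budget so the example terminates quickly
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()
	err := waitHealthy(ctx, 8*time.Second, func() error {
		return errors.New("healthz timeout") // stand-in for the GET shown earlier
	})
	fmt.Println(err)
}
```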
	I0311 13:57:20.450596    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:25.451967    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:25.452372    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:25.496285    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:57:25.496429    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:25.520107    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:57:25.520197    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:25.534118    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:57:25.534204    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:25.546161    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:57:25.546230    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:25.560386    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:57:25.560444    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:25.570679    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:57:25.570742    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:25.581119    4147 logs.go:276] 0 containers: []
	W0311 13:57:25.581130    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:25.581190    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:25.591374    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:57:25.591391    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:25.591397    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:25.625485    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:57:25.625496    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:57:25.640183    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:57:25.640195    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:57:25.657504    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:57:25.657515    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:57:25.669223    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:57:25.669235    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:57:25.683829    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:25.683840    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:25.722289    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:57:25.722299    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:57:25.736772    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:57:25.736782    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:57:25.757263    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:57:25.757275    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:57:25.777618    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:57:25.777629    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:57:25.789077    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:25.789087    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:25.793353    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:57:25.793359    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:57:25.831272    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:57:25.831285    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:57:25.847091    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:57:25.847105    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:57:25.861753    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:25.861766    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:25.884540    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:57:25.884549    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:28.397494    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:33.400029    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:33.400534    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:33.440793    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:57:33.440934    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:33.462406    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:57:33.462525    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:33.485786    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:57:33.485862    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:33.503451    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:57:33.503529    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:33.514896    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:57:33.514968    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:33.525573    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:57:33.525644    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:33.535278    4147 logs.go:276] 0 containers: []
	W0311 13:57:33.535289    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:33.535348    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:33.545921    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:57:33.545940    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:57:33.545946    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:57:33.586833    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:57:33.586843    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:57:33.598471    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:57:33.598484    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:57:33.613816    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:57:33.613825    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:33.626181    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:57:33.626194    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:57:33.639674    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:57:33.639686    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:57:33.650748    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:33.650758    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:33.673900    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:33.673909    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:33.712401    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:33.712411    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:33.716605    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:57:33.716613    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:57:33.728212    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:57:33.728227    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:57:33.743254    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:33.743267    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:33.777698    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:57:33.777709    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:57:33.792595    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:57:33.792608    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:57:33.807378    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:57:33.807391    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:57:33.824635    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:57:33.824647    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:57:36.337564    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:41.339888    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:41.340102    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:57:41.354514    4147 logs.go:276] 2 containers: [13edbf8d8a09 79e4869035fa]
	I0311 13:57:41.354593    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:57:41.366455    4147 logs.go:276] 2 containers: [032c1ac7c6a6 77caab578172]
	I0311 13:57:41.366523    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:57:41.376767    4147 logs.go:276] 1 containers: [ae741f62ad7c]
	I0311 13:57:41.376835    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:57:41.387537    4147 logs.go:276] 2 containers: [4151959a3f28 d64c5c8a52ca]
	I0311 13:57:41.387611    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:57:41.398578    4147 logs.go:276] 1 containers: [66941007e29d]
	I0311 13:57:41.398645    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:57:41.409378    4147 logs.go:276] 2 containers: [b61bca5e426d d2bff02933cb]
	I0311 13:57:41.409451    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:57:41.419685    4147 logs.go:276] 0 containers: []
	W0311 13:57:41.419695    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:57:41.419756    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:57:41.430054    4147 logs.go:276] 1 containers: [5aa3331a9bc9]
	I0311 13:57:41.430071    4147 logs.go:123] Gathering logs for kube-proxy [66941007e29d] ...
	I0311 13:57:41.430077    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66941007e29d"
	I0311 13:57:41.441906    4147 logs.go:123] Gathering logs for kube-controller-manager [b61bca5e426d] ...
	I0311 13:57:41.441917    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b61bca5e426d"
	I0311 13:57:41.459992    4147 logs.go:123] Gathering logs for storage-provisioner [5aa3331a9bc9] ...
	I0311 13:57:41.460002    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa3331a9bc9"
	I0311 13:57:41.475648    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:57:41.475660    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:57:41.513184    4147 logs.go:123] Gathering logs for kube-controller-manager [d2bff02933cb] ...
	I0311 13:57:41.513197    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2bff02933cb"
	I0311 13:57:41.528491    4147 logs.go:123] Gathering logs for kube-apiserver [79e4869035fa] ...
	I0311 13:57:41.528501    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79e4869035fa"
	I0311 13:57:41.564459    4147 logs.go:123] Gathering logs for etcd [032c1ac7c6a6] ...
	I0311 13:57:41.564472    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 032c1ac7c6a6"
	I0311 13:57:41.579084    4147 logs.go:123] Gathering logs for etcd [77caab578172] ...
	I0311 13:57:41.579095    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77caab578172"
	I0311 13:57:41.593399    4147 logs.go:123] Gathering logs for coredns [ae741f62ad7c] ...
	I0311 13:57:41.593409    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae741f62ad7c"
	I0311 13:57:41.605184    4147 logs.go:123] Gathering logs for kube-scheduler [4151959a3f28] ...
	I0311 13:57:41.605196    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4151959a3f28"
	I0311 13:57:41.617224    4147 logs.go:123] Gathering logs for kube-scheduler [d64c5c8a52ca] ...
	I0311 13:57:41.617234    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d64c5c8a52ca"
	I0311 13:57:41.632892    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:57:41.632903    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:57:41.655007    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:57:41.655014    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:57:41.695432    4147 logs.go:123] Gathering logs for kube-apiserver [13edbf8d8a09] ...
	I0311 13:57:41.695443    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13edbf8d8a09"
	I0311 13:57:41.709570    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:57:41.709580    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:57:41.721748    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:57:41.721760    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:57:44.228295    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:49.230448    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:57:49.230491    4147 kubeadm.go:591] duration metric: took 4m3.932571167s to restartPrimaryControlPlane
	W0311 13:57:49.230524    4147 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0311 13:57:49.230539    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0311 13:57:50.309909    4147 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.079392334s)
	I0311 13:57:50.310211    4147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 13:57:50.315741    4147 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 13:57:50.318667    4147 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 13:57:50.321535    4147 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 13:57:50.321544    4147 kubeadm.go:156] found existing configuration files:
	
	I0311 13:57:50.321568    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf
	I0311 13:57:50.324369    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 13:57:50.324391    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 13:57:50.326987    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf
	I0311 13:57:50.329595    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 13:57:50.329635    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 13:57:50.333170    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf
	I0311 13:57:50.336275    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 13:57:50.336316    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 13:57:50.339304    4147 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf
	I0311 13:57:50.342327    4147 kubeadm.go:162] "https://control-plane.minikube.internal:50301" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50301 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 13:57:50.342368    4147 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
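
The cleanup pass above applies one rule per file: keep /etc/kubernetes/<name>.conf only if it already references the expected control-plane endpoint, otherwise remove it so the upcoming kubeadm init regenerates it. Here every grep exited with status 2 because kubeadm reset had already deleted the files, so each rm -f was a no-op. A sketch of that assumed logic (not the verbatim minikube implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50301"
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + name
    		// grep exits non-zero when the endpoint (or the file itself) is missing
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
    				fmt.Println("rm failed:", err)
    			}
    		}
    	}
    }
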
	I0311 13:57:50.345428    4147 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0311 13:57:50.364270    4147 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0311 13:57:50.364336    4147 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 13:57:50.414171    4147 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 13:57:50.414278    4147 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 13:57:50.414351    4147 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 13:57:50.462628    4147 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 13:57:50.466878    4147 out.go:204]   - Generating certificates and keys ...
	I0311 13:57:50.466918    4147 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 13:57:50.466952    4147 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 13:57:50.467002    4147 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0311 13:57:50.467030    4147 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0311 13:57:50.467060    4147 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0311 13:57:50.467087    4147 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0311 13:57:50.467117    4147 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0311 13:57:50.467147    4147 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0311 13:57:50.467185    4147 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0311 13:57:50.467229    4147 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0311 13:57:50.467260    4147 kubeadm.go:309] [certs] Using the existing "sa" key
	I0311 13:57:50.467294    4147 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 13:57:50.597901    4147 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 13:57:50.725590    4147 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 13:57:50.870651    4147 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 13:57:51.041352    4147 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 13:57:51.074066    4147 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 13:57:51.074419    4147 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 13:57:51.074494    4147 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 13:57:51.140528    4147 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 13:57:51.144354    4147 out.go:204]   - Booting up control plane ...
	I0311 13:57:51.144453    4147 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 13:57:51.144511    4147 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 13:57:51.144616    4147 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 13:57:51.144855    4147 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 13:57:51.146805    4147 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 13:57:55.646558    4147 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501700 seconds
	I0311 13:57:55.646623    4147 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 13:57:55.650454    4147 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 13:57:56.169954    4147 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 13:57:56.170274    4147 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-517000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 13:57:56.673451    4147 kubeadm.go:309] [bootstrap-token] Using token: gume1m.hmi0p4uac7yg3e74
	I0311 13:57:56.676835    4147 out.go:204]   - Configuring RBAC rules ...
	I0311 13:57:56.676896    4147 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 13:57:56.676957    4147 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 13:57:56.682601    4147 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 13:57:56.683625    4147 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 13:57:56.684419    4147 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 13:57:56.685277    4147 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 13:57:56.688778    4147 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 13:57:56.823845    4147 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 13:57:57.078255    4147 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 13:57:57.078930    4147 kubeadm.go:309] 
	I0311 13:57:57.078976    4147 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 13:57:57.078980    4147 kubeadm.go:309] 
	I0311 13:57:57.079027    4147 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 13:57:57.079034    4147 kubeadm.go:309] 
	I0311 13:57:57.079047    4147 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 13:57:57.079077    4147 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 13:57:57.079121    4147 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 13:57:57.079128    4147 kubeadm.go:309] 
	I0311 13:57:57.079157    4147 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 13:57:57.079160    4147 kubeadm.go:309] 
	I0311 13:57:57.079186    4147 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 13:57:57.079189    4147 kubeadm.go:309] 
	I0311 13:57:57.079216    4147 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 13:57:57.079257    4147 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 13:57:57.079311    4147 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 13:57:57.079319    4147 kubeadm.go:309] 
	I0311 13:57:57.079377    4147 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 13:57:57.079437    4147 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 13:57:57.079441    4147 kubeadm.go:309] 
	I0311 13:57:57.079511    4147 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token gume1m.hmi0p4uac7yg3e74 \
	I0311 13:57:57.079599    4147 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b0f804fee3777fe090204338c70c85586d2b95499c0fea24e08ef3935500f54b \
	I0311 13:57:57.079618    4147 kubeadm.go:309] 	--control-plane 
	I0311 13:57:57.079621    4147 kubeadm.go:309] 
	I0311 13:57:57.079682    4147 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 13:57:57.079685    4147 kubeadm.go:309] 
	I0311 13:57:57.079744    4147 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token gume1m.hmi0p4uac7yg3e74 \
	I0311 13:57:57.079823    4147 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b0f804fee3777fe090204338c70c85586d2b95499c0fea24e08ef3935500f54b 
	I0311 13:57:57.080434    4147 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
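
The --discovery-token-ca-cert-hash printed in the join commands above is, per the kubeadm documentation, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch that recomputes it from the certificate directory named earlier in the output (the ca.crt filename is an assumption):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// certificateDir comes from the "[certs] Using certificateDir" line above
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's Subject Public Key Info
    	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }
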
	I0311 13:57:57.080445    4147 cni.go:84] Creating CNI manager for ""
	I0311 13:57:57.080455    4147 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:57:57.083228    4147 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0311 13:57:57.090341    4147 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0311 13:57:57.093548    4147 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
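
The 457-byte /etc/cni/net.d/1-k8s.conflist is copied from memory, so its contents never appear in the log; a bridge conflist for a single-node docker-runtime cluster typically has the shape generated below (all field values are illustrative, not read from this test):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16", // pod CIDR is an assumption
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	b, _ := json.MarshalIndent(conf, "", "  ")
    	fmt.Println(string(b))
    }
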
	I0311 13:57:57.098511    4147 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 13:57:57.098562    4147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 13:57:57.098567    4147 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-517000 minikube.k8s.io/updated_at=2024_03_11T13_57_57_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=stopped-upgrade-517000 minikube.k8s.io/primary=true
	I0311 13:57:57.134643    4147 kubeadm.go:1106] duration metric: took 36.118958ms to wait for elevateKubeSystemPrivileges
	I0311 13:57:57.139855    4147 ops.go:34] apiserver oom_adj: -16
	W0311 13:57:57.139972    4147 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 13:57:57.139978    4147 kubeadm.go:393] duration metric: took 4m11.861143292s to StartCluster
	I0311 13:57:57.139988    4147 settings.go:142] acquiring lock: {Name:mkde8963c2fec7d8df74a4e81a4ba3233d320136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:57:57.140075    4147 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:57:57.140445    4147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/kubeconfig: {Name:mkd61d3fa94ba0392c00bb2cce43bcec89e45a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:57:57.141180    4147 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 13:57:57.145361    4147 out.go:177] * Verifying Kubernetes components...
	I0311 13:57:57.141192    4147 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0311 13:57:57.141251    4147 config.go:182] Loaded profile config "stopped-upgrade-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0311 13:57:57.152271    4147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 13:57:57.152278    4147 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-517000"
	I0311 13:57:57.152272    4147 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-517000"
	I0311 13:57:57.152292    4147 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-517000"
	I0311 13:57:57.152302    4147 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-517000"
	W0311 13:57:57.152308    4147 addons.go:243] addon storage-provisioner should already be in state true
	I0311 13:57:57.152321    4147 host.go:66] Checking if "stopped-upgrade-517000" exists ...
	I0311 13:57:57.152778    4147 retry.go:31] will retry after 1.288132439s: connect: dial unix /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/monitor: connect: connection refused
	I0311 13:57:57.157312    4147 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 13:57:57.161318    4147 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:57:57.161324    4147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 13:57:57.161331    4147 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/id_rsa Username:docker}
	I0311 13:57:57.220394    4147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 13:57:57.225490    4147 api_server.go:52] waiting for apiserver process to appear ...
	I0311 13:57:57.225529    4147 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 13:57:57.230045    4147 api_server.go:72] duration metric: took 88.856042ms to wait for apiserver process to appear ...
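
Before polling healthz again, the verifier above first waits for the apiserver process itself with pgrep -xnf kube-apiserver.*minikube.*, which only proves a matching process exists, not that the API is serving. A tiny sketch of that check:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x: match the whole command line, -n: newest match, -f: match full cmdline
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("apiserver process not found yet:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
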
	I0311 13:57:57.230053    4147 api_server.go:88] waiting for apiserver healthz status ...
	I0311 13:57:57.230061    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:57:57.245553    4147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 13:57:58.442685    4147 kapi.go:59] client config for stopped-upgrade-517000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/stopped-upgrade-517000/client.key", CAFile:"/Users/jenkins/minikube-integration/18358-1220/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d93fd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0311 13:57:58.442826    4147 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-517000"
	W0311 13:57:58.442832    4147 addons.go:243] addon default-storageclass should already be in state true
	I0311 13:57:58.442846    4147 host.go:66] Checking if "stopped-upgrade-517000" exists ...
	I0311 13:57:58.443694    4147 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 13:57:58.443701    4147 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 13:57:58.443707    4147 sshutil.go:53] new ssh client: &{IP:localhost Port:50269 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/stopped-upgrade-517000/id_rsa Username:docker}
	I0311 13:57:58.483465    4147 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 13:58:02.231967    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:02.231990    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:07.232027    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:07.232054    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:12.232166    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:12.232187    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:17.232354    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:17.232377    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:22.232642    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:22.232661    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:27.233040    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:27.233098    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0311 13:58:28.534675    4147 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0311 13:58:28.540065    4147 out.go:177] * Enabled addons: storage-provisioner
	I0311 13:58:28.546019    4147 addons.go:505] duration metric: took 31.405837042s for enable addons: enabled=[storage-provisioner]
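
Addon enablement here is just a staged manifest plus the node's pinned kubectl, mirroring the apply command at 13:57:57.245 above; default-storageclass additionally has to list StorageClasses through the API server, which is why it failed with the i/o timeout while storage-provisioner's apply went through. A hypothetical wrapper for the apply step:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyAddon runs the in-VM kubectl against the node kubeconfig, the same
    // form as the logged command (sudo accepts the leading KUBECONFIG=... pair).
    func applyAddon(manifest string) error {
    	out, err := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", manifest).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }
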
	I0311 13:58:32.233784    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:32.233811    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:37.234563    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:37.234620    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:42.235695    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:42.235739    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:47.237193    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:47.237228    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:52.238956    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:52.238979    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:58:57.240155    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:58:57.240263    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:58:57.251076    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:58:57.251143    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:58:57.261745    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:58:57.261818    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:58:57.272072    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:58:57.272144    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:58:57.282562    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:58:57.282624    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:58:57.293413    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:58:57.293489    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:58:57.304394    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:58:57.304459    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:58:57.314573    4147 logs.go:276] 0 containers: []
	W0311 13:58:57.314585    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:58:57.314651    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:58:57.324721    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:58:57.324736    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:58:57.324742    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:58:57.340075    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:58:57.340090    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:58:57.355480    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:58:57.355492    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:58:57.359641    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:58:57.359647    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:58:57.372867    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:58:57.372878    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:58:57.386916    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:58:57.386927    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:58:57.398492    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:58:57.398503    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:58:57.410188    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:58:57.410200    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:58:57.422255    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:58:57.422267    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:58:57.439574    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:58:57.439587    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:58:57.464589    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:58:57.464602    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:58:57.502648    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:58:57.502660    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:58:57.539438    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:58:57.539451    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:00.053288    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:05.055539    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:05.055717    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:05.067253    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:05.067336    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:05.080471    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:05.080547    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:05.091605    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:05.091675    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:05.102835    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:05.102913    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:05.113519    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:05.113587    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:05.124060    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:05.124126    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:05.134224    4147 logs.go:276] 0 containers: []
	W0311 13:59:05.134234    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:05.134287    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:05.144985    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:59:05.145001    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:05.145006    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:05.159631    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:05.159641    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:05.171417    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:05.171427    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:05.194934    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:05.194945    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:05.206187    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:05.206197    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:05.243806    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:05.243819    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:05.257877    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:05.257893    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:05.270283    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:05.270309    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:05.282845    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:05.286135    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:05.307888    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:05.307900    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:05.319886    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:05.319900    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:05.359030    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:05.359047    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:05.363605    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:05.363611    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:07.878934    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:12.881191    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:12.881430    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:12.899735    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:12.899831    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:12.914859    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:12.914930    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:12.929928    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:12.930005    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:12.941165    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:12.941228    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:12.952287    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:12.952347    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:12.962832    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:12.962905    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:12.972269    4147 logs.go:276] 0 containers: []
	W0311 13:59:12.972283    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:12.972336    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:12.985956    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:59:12.985972    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:12.985979    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:12.997756    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:12.997769    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:13.016368    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:13.016379    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:13.027261    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:13.027276    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:13.038373    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:13.038383    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:13.043105    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:13.043111    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:13.077806    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:13.077817    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:13.092433    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:13.092442    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:13.105731    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:13.105745    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:13.120725    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:13.120736    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:13.144725    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:13.144735    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:13.181215    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:13.181225    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:13.195073    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:13.195083    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:15.708802    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:20.710901    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:20.711198    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:20.739652    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:20.739779    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:20.757318    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:20.757422    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:20.775321    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:20.775391    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:20.786967    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:20.787034    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:20.798038    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:20.798114    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:20.808615    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:20.808685    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:20.818857    4147 logs.go:276] 0 containers: []
	W0311 13:59:20.818866    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:20.818923    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:20.829088    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:59:20.829104    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:20.829109    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:20.840755    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:20.840766    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:20.859485    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:20.859500    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:20.897974    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:20.897983    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:20.933838    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:20.933850    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:20.948364    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:20.948376    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:20.962142    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:20.962156    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:20.973361    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:20.973375    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:20.998366    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:20.998375    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:21.009315    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:21.009326    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:21.014039    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:21.014046    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:21.025524    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:21.025537    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:21.041168    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:21.041179    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:23.555216    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:28.555773    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:28.556052    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:28.585373    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:28.585500    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:28.603276    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:28.603380    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:28.617919    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:28.617985    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:28.629923    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:28.629985    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:28.640761    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:28.640823    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:28.655496    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:28.655555    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:28.670075    4147 logs.go:276] 0 containers: []
	W0311 13:59:28.670088    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:28.670150    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:28.680713    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:59:28.680731    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:28.680736    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:28.696475    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:28.696485    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:28.708649    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:28.708660    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:28.721749    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:28.721761    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:28.761549    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:28.761560    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:28.802533    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:28.802546    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:28.816711    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:28.816721    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:28.831296    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:28.831308    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:28.849286    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:28.849298    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:28.873062    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:28.873075    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:28.877688    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:28.877697    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:28.889190    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:28.889201    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:28.900663    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:28.900674    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:31.417678    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:36.419864    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:36.420065    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:36.439981    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:36.440092    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:36.454422    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:36.454497    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:36.465777    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:36.465846    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:36.475928    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:36.475997    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:36.487081    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:36.487151    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:36.502714    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:36.502781    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:36.512900    4147 logs.go:276] 0 containers: []
	W0311 13:59:36.512914    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:36.512975    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:36.523490    4147 logs.go:276] 1 containers: [e3c3c6294347]
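After each failed probe, the runner looks up one container per control-plane component with "docker ps -a --filter name=k8s_<component> --format {{.ID}}", as in the logs.go:276 lines above; an empty result yields the 'No container was found matching "kindnet"' warning. A sketch of that lookup, assuming a local Docker daemon instead of the ssh_runner used here:

    // container_discovery.go: collect container IDs per kubeadm-named component.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches the kubeadm prefix "k8s_<component>".
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("lookup %s failed: %v\n", c, err)
                continue
            }
            // 0 containers is what triggers: No container was found matching "<c>"
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }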
	I0311 13:59:36.523504    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:36.523509    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:36.543062    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:36.543072    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:36.557841    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:36.557850    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:36.575294    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:36.575304    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:36.614288    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:36.614299    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:36.618835    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:36.618846    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:36.633405    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:36.633418    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:36.644720    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:36.644731    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:36.656035    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:36.656044    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:36.679327    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:36.679357    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:36.691901    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:36.691911    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:36.726636    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:36.726647    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:36.739942    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:36.739952    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:39.253110    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:44.255309    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:44.255668    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:44.284679    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:44.284802    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:44.302872    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:44.302959    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:44.317245    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:44.317319    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:44.328559    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:44.328627    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:44.339709    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:44.339784    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:44.350437    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:44.350503    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:44.361105    4147 logs.go:276] 0 containers: []
	W0311 13:59:44.361115    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:44.361168    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:44.372269    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:59:44.372285    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:44.372291    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:44.377082    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:44.377090    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:44.395323    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:44.395333    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:44.406966    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:44.406980    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:44.418908    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:44.418918    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:44.436597    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:44.436611    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:44.448043    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:44.448059    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:44.460766    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:44.460777    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:44.497893    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:44.497901    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:44.530928    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:44.530937    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:44.546586    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:44.546597    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:44.557974    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:44.557988    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:44.572872    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:44.572884    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:47.099987    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:52.100236    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:52.100380    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:52.112145    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:52.112225    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:52.122631    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:52.122702    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:52.132997    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:52.133067    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:52.143793    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:52.143865    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:52.153787    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:52.153858    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:52.166956    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:52.167022    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:52.177921    4147 logs.go:276] 0 containers: []
	W0311 13:59:52.177932    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:52.177997    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 13:59:52.188039    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 13:59:52.188053    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 13:59:52.188059    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 13:59:52.193042    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 13:59:52.193050    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 13:59:52.207393    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 13:59:52.207404    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 13:59:52.221187    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 13:59:52.221199    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 13:59:52.240383    4147 logs.go:123] Gathering logs for container status ...
	I0311 13:59:52.240397    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 13:59:52.252364    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 13:59:52.252375    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 13:59:52.264179    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 13:59:52.264191    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 13:59:52.280253    4147 logs.go:123] Gathering logs for Docker ...
	I0311 13:59:52.280264    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 13:59:52.304432    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 13:59:52.304440    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 13:59:52.341484    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 13:59:52.341492    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 13:59:52.376048    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 13:59:52.376059    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 13:59:52.387688    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 13:59:52.387698    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 13:59:52.401221    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 13:59:52.401231    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 13:59:54.917643    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 13:59:59.918065    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 13:59:59.918175    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 13:59:59.929867    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 13:59:59.929940    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 13:59:59.940799    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 13:59:59.940864    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 13:59:59.955844    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 13:59:59.955918    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 13:59:59.966871    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 13:59:59.966938    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 13:59:59.976900    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 13:59:59.976972    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 13:59:59.987851    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 13:59:59.987949    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 13:59:59.998821    4147 logs.go:276] 0 containers: []
	W0311 13:59:59.998833    4147 logs.go:278] No container was found matching "kindnet"
	I0311 13:59:59.998893    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:00.009236    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:00.009250    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:00.009257    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:00.022886    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:00.022899    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:00.034479    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:00.034490    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:00.046122    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:00.046132    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:00.069257    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:00.069266    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:00.073159    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:00.073165    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:00.090036    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:00.090047    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:00.105089    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:00.105099    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:00.117387    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:00.117397    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:00.132134    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:00.132145    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:00.149060    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:00.149070    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:00.160114    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:00.160123    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:00.197466    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:00.197476    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
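The "Gathering logs for ..." steps then fan out over fixed sources (the kubelet and Docker journals, dmesg, describe nodes) plus one "docker logs --tail 400" per container found above. The steps run in a different order each cycle, which is consistent with ranging over a Go map. A sketch under the assumption of local execution, with the command strings copied from the log:

    // gather_logs.go: the per-cycle log fan-out.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(name, command string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
        if err != nil {
            fmt.Printf("gathering %s failed: %v\n", name, err)
        }
        fmt.Printf("%s", out)
    }

    func main() {
        // map iteration order is randomized, matching the shuffled order above
        sources := map[string]string{
            "kubelet":        "sudo journalctl -u kubelet -n 400",
            "Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            // plus one entry per container ID discovered above, e.g.:
            "etcd [c78c5ca8b4ac]": "docker logs --tail 400 c78c5ca8b4ac",
        }
        for name, cmd := range sources {
            gather(name, cmd)
        }
    }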
	I0311 14:00:02.732621    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:07.734068    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:07.734279    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:07.751933    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:07.752024    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:07.764775    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:07.764849    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:07.775625    4147 logs.go:276] 2 containers: [2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:07.775698    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:07.786276    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:07.786336    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:07.796791    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:07.796861    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:07.807791    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:07.807855    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:07.818663    4147 logs.go:276] 0 containers: []
	W0311 14:00:07.818673    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:07.818737    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:07.829442    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:07.829458    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:07.829466    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:07.841300    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:07.841312    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:07.845664    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:07.845674    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:07.863796    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:07.863807    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:07.875794    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:07.875805    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:07.890448    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:07.890459    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:07.902186    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:07.902196    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:07.923958    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:07.923968    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:07.961284    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:07.961294    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:07.997191    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:07.997203    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:08.011609    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:08.011625    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:08.026904    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:08.026917    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:08.038222    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:08.038236    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:10.563200    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:15.565340    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:15.565454    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:15.578191    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:15.578264    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:15.588343    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:15.588416    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:15.599679    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:15.599757    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:15.609695    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:15.609762    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:15.621497    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:15.621560    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:15.632059    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:15.632132    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:15.642431    4147 logs.go:276] 0 containers: []
	W0311 14:00:15.642443    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:15.642498    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:15.652631    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:15.652649    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:15.652654    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:15.666908    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:15.666919    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:15.678688    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:15.678702    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:15.697082    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:15.697093    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:15.735602    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:00:15.735615    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:00:15.751518    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:15.751529    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:15.756413    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:15.756423    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:15.773801    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:15.773812    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:15.794251    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:15.794261    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:15.806525    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:15.806536    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:15.817518    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:15.817527    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:15.842305    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:15.842314    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:15.855757    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:00:15.855766    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:00:15.867407    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:15.867418    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:15.883773    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:15.883788    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:18.425043    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:23.427173    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:23.427398    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:23.448779    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:23.448874    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:23.463355    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:23.463435    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:23.475800    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:23.475877    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:23.486063    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:23.486131    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:23.496249    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:23.496316    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:23.507582    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:23.507659    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:23.518188    4147 logs.go:276] 0 containers: []
	W0311 14:00:23.518200    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:23.518257    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:23.528380    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:23.528396    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:23.528401    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:23.565342    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:23.565349    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:23.582400    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:23.582415    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:23.595170    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:23.595181    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:23.630212    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:23.630223    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:23.644443    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:23.644456    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:23.656864    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:23.656877    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:23.668772    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:23.668782    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:23.680552    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:00:23.680561    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:00:23.693352    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:23.693362    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:23.705830    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:23.705841    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:23.729287    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:23.729296    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:23.754001    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:23.754013    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:23.758314    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:23.758323    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:23.776977    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:00:23.776988    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:00:26.290607    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:31.292953    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:31.293270    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:31.320947    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:31.321094    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:31.338591    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:31.338679    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:31.351701    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:31.351787    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:31.365142    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:31.365214    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:31.375519    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:31.375584    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:31.386022    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:31.386094    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:31.395939    4147 logs.go:276] 0 containers: []
	W0311 14:00:31.395950    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:31.396004    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:31.406398    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:31.406414    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:31.406419    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:31.444571    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:31.444582    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:31.463458    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:31.463470    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:31.488222    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:31.488232    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:31.492406    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:31.492415    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:31.505942    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:00:31.505956    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:00:31.518022    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:31.518032    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:31.533095    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:31.533106    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:31.546836    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:31.546847    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:31.581794    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:31.581807    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:31.593495    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:31.593506    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:31.611275    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:00:31.611287    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:00:31.622307    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:31.622321    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:31.633884    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:31.633898    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:31.646436    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:31.646449    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:34.160622    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:39.162495    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:39.162721    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:39.178442    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:39.178530    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:39.190794    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:39.190867    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:39.203564    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:39.203640    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:39.214314    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:39.214384    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:39.225248    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:39.225322    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:39.242328    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:39.242395    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:39.252782    4147 logs.go:276] 0 containers: []
	W0311 14:00:39.252794    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:39.252852    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:39.263507    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:39.263524    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:39.263530    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:39.275109    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:39.275119    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:39.289185    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:00:39.289198    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:00:39.300926    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:39.300937    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:39.312524    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:39.312536    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:39.324561    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:39.324571    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:39.347652    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:39.347663    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:39.364961    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:39.364971    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:39.378576    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:39.378587    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:39.405800    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:39.405809    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:39.410356    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:39.410365    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:39.446244    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:39.446255    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:39.460889    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:39.460902    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:39.478781    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:39.478792    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:39.518842    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:00:39.518857    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:00:42.032915    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:47.034991    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:47.035102    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:47.046840    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:47.046921    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:47.058111    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:47.058172    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:47.068809    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:47.068884    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:47.079480    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:47.079551    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:47.089970    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:47.090038    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:47.105334    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:47.105405    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:47.115461    4147 logs.go:276] 0 containers: []
	W0311 14:00:47.115474    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:47.115525    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:47.126199    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:47.126217    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:47.126223    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:47.161239    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:00:47.161253    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:00:47.172973    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:47.172986    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:47.190153    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:47.190163    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:47.194738    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:47.194744    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:47.215629    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:00:47.215642    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:00:47.227550    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:47.227562    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:47.242749    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:47.242762    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:47.255390    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:47.255401    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:47.293527    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:47.293535    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:47.305219    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:47.305233    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:47.320598    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:47.320608    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:47.332532    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:47.332542    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:47.355963    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:47.355971    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:47.369773    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:47.369784    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
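The "container status" command is a shell fallback chain: "which crictl || echo crictl" substitutes the crictl path when the tool is installed (and the bare word "crictl", which will fail to run, when it is not), and the trailing "|| sudo docker ps -a" runs the Docker CLI whenever the first branch fails. The same preference order expressed directly, as an illustrative sketch:

    // container_status.go: prefer crictl, fall back to the Docker CLI.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // prefer the CRI-aware tool when it is on PATH
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                fmt.Printf("%s", out)
                return
            }
        }
        // otherwise take the `|| sudo docker ps -a` branch
        out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        fmt.Printf("%s", out)
    }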
	I0311 14:00:49.884236    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:00:54.886461    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:00:54.886665    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:00:54.909477    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:00:54.909568    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:00:54.924241    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:00:54.924317    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:00:54.936985    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:00:54.937059    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:00:54.951299    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:00:54.951376    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:00:54.961995    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:00:54.962070    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:00:54.972222    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:00:54.972283    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:00:54.984040    4147 logs.go:276] 0 containers: []
	W0311 14:00:54.984051    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:00:54.984113    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:00:54.999872    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:00:54.999892    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:00:54.999898    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:00:55.017315    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:00:55.017326    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:00:55.032945    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:00:55.032954    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:00:55.057806    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:00:55.057814    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:00:55.096417    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:00:55.096436    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:00:55.131544    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:00:55.131557    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:00:55.152359    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:00:55.152372    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:00:55.164294    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:00:55.164305    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:00:55.179095    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:00:55.179110    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:00:55.190992    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:00:55.191004    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:00:55.195477    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:00:55.195488    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:00:55.209075    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:00:55.209085    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:00:55.220483    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:00:55.220493    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:00:55.238130    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:00:55.238139    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:00:55.250066    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:00:55.250077    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:00:57.763773    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:02.766010    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:02.766246    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:02.790692    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:02.790790    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:02.813184    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:02.813264    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:02.825869    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:02.825949    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:02.836259    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:02.836328    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:02.846223    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:02.846284    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:02.861124    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:02.861186    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:02.871325    4147 logs.go:276] 0 containers: []
	W0311 14:01:02.871337    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:02.871398    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:02.882282    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:02.882300    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:02.882305    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:02.906824    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:02.906836    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:02.941951    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:02.941962    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:02.957173    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:02.957190    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:02.969202    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:02.969213    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:02.982998    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:02.983008    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:02.995608    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:02.995619    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:03.013192    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:03.013202    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:03.017950    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:03.017956    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:03.029621    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:03.029631    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:03.041743    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:03.041752    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:03.057395    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:03.057406    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:03.069407    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:03.069418    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:03.107798    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:03.107806    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:03.122670    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:03.122681    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:05.636119    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:10.638240    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:10.638441    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:10.655086    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:10.655162    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:10.665171    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:10.665240    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:10.675749    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:10.675827    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:10.687073    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:10.687141    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:10.697304    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:10.697375    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:10.707991    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:10.708057    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:10.718954    4147 logs.go:276] 0 containers: []
	W0311 14:01:10.718966    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:10.719028    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:10.729401    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:10.729418    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:10.729423    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:10.734319    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:10.734326    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:10.745905    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:10.745915    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:10.757625    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:10.757635    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:10.795214    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:10.795224    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:10.806480    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:10.806490    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:10.824041    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:10.824050    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:10.849105    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:10.849116    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:10.863225    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:10.863236    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:10.875301    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:10.875311    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:10.887194    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:10.887204    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:10.900295    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:10.900309    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:10.940353    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:10.940363    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:10.959250    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:10.959264    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:10.970555    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:10.970566    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:13.487711    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:18.489902    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:18.490095    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:18.507312    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:18.507402    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:18.520676    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:18.520744    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:18.532805    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:18.532872    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:18.547457    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:18.547522    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:18.558122    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:18.558189    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:18.569335    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:18.569400    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:18.579279    4147 logs.go:276] 0 containers: []
	W0311 14:01:18.579290    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:18.579350    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:18.591404    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:18.591421    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:18.591427    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:18.603421    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:18.603432    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:18.628075    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:18.628086    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:18.640417    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:18.640430    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:18.652363    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:18.652374    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:18.664017    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:18.664027    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:18.681211    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:18.681221    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:18.693500    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:18.693511    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:18.728572    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:18.728581    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:18.750777    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:18.750790    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:18.762538    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:18.762547    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:18.777006    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:18.777017    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:18.814215    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:18.814223    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:18.818438    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:18.818445    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:18.832662    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:18.832673    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:21.346462    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:26.348906    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:26.349092    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:26.371008    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:26.371107    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:26.388992    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:26.389074    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:26.401520    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:26.401595    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:26.412790    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:26.412858    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:26.430935    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:26.431002    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:26.442102    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:26.442167    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:26.453042    4147 logs.go:276] 0 containers: []
	W0311 14:01:26.453054    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:26.453110    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:26.463415    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:26.463432    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:26.463438    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:26.498268    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:26.498282    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:26.512293    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:26.512305    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:26.529257    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:26.529267    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:26.553135    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:26.553146    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:26.557872    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:26.557879    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:26.569707    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:26.569720    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:26.581436    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:26.581448    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:26.593380    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:26.593393    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:26.632451    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:26.632463    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:26.647527    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:26.647540    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:26.661806    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:26.661818    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:26.675906    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:26.675917    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:26.688338    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:26.688348    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:26.700416    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:26.700426    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:29.214505    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:34.216980    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:34.217101    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:34.229174    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:34.229246    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:34.240796    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:34.240862    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:34.252700    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:34.252776    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:34.263202    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:34.263273    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:34.273465    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:34.273530    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:34.287959    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:34.288031    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:34.297764    4147 logs.go:276] 0 containers: []
	W0311 14:01:34.297780    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:34.297834    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:34.312413    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:34.312433    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:34.312438    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:34.325142    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:34.325153    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:34.336830    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:34.336843    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:34.360341    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:34.360349    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:34.382172    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:34.382183    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:34.393846    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:34.393858    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:34.405724    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:34.405737    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:34.418879    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:34.418889    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:34.457868    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:34.457881    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:34.472410    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:34.472420    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:34.484011    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:34.484023    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:34.499167    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:34.499177    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:34.517053    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:34.517067    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:34.522222    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:34.522235    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:34.535678    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:34.535693    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:37.076954    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:42.079210    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:42.079538    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:42.108511    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:42.108664    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:42.130720    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:42.130807    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:42.143636    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:42.143714    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:42.154261    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:42.154325    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:42.166401    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:42.166469    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:42.181236    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:42.181311    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:42.192206    4147 logs.go:276] 0 containers: []
	W0311 14:01:42.192217    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:42.192274    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:42.202553    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:42.202570    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:42.202576    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:42.206750    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:42.206758    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:42.243139    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:42.243150    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:42.254903    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:42.254912    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:42.266141    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:42.266154    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:42.281313    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:42.281324    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:42.297899    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:42.297910    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:42.309224    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:42.309237    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:42.334891    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:42.334899    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:42.373134    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:42.373150    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:42.389131    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:42.389143    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:42.403241    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:42.403253    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:42.415140    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:42.415152    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:42.427457    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:42.427467    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:42.445104    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:42.445116    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:44.959221    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:49.961393    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:49.961526    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0311 14:01:49.979338    4147 logs.go:276] 1 containers: [cd6426b65374]
	I0311 14:01:49.979413    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0311 14:01:49.989393    4147 logs.go:276] 1 containers: [c78c5ca8b4ac]
	I0311 14:01:49.989463    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0311 14:01:50.002160    4147 logs.go:276] 4 containers: [6459803421c7 2ea39dc73261 2dd7a5b4c30b 9472bc52aa3f]
	I0311 14:01:50.002229    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0311 14:01:50.012842    4147 logs.go:276] 1 containers: [5f3d696666c1]
	I0311 14:01:50.012913    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0311 14:01:50.031226    4147 logs.go:276] 1 containers: [bff0a9595bb6]
	I0311 14:01:50.031292    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0311 14:01:50.041727    4147 logs.go:276] 1 containers: [08875e3858c8]
	I0311 14:01:50.041791    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0311 14:01:50.052254    4147 logs.go:276] 0 containers: []
	W0311 14:01:50.052266    4147 logs.go:278] No container was found matching "kindnet"
	I0311 14:01:50.052323    4147 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0311 14:01:50.063203    4147 logs.go:276] 1 containers: [e3c3c6294347]
	I0311 14:01:50.063220    4147 logs.go:123] Gathering logs for coredns [2ea39dc73261] ...
	I0311 14:01:50.063226    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ea39dc73261"
	I0311 14:01:50.078913    4147 logs.go:123] Gathering logs for kube-proxy [bff0a9595bb6] ...
	I0311 14:01:50.078923    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff0a9595bb6"
	I0311 14:01:50.090871    4147 logs.go:123] Gathering logs for kube-controller-manager [08875e3858c8] ...
	I0311 14:01:50.090883    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08875e3858c8"
	I0311 14:01:50.107798    4147 logs.go:123] Gathering logs for etcd [c78c5ca8b4ac] ...
	I0311 14:01:50.107811    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c78c5ca8b4ac"
	I0311 14:01:50.122029    4147 logs.go:123] Gathering logs for coredns [6459803421c7] ...
	I0311 14:01:50.122041    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6459803421c7"
	I0311 14:01:50.133829    4147 logs.go:123] Gathering logs for coredns [2dd7a5b4c30b] ...
	I0311 14:01:50.133842    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dd7a5b4c30b"
	I0311 14:01:50.145544    4147 logs.go:123] Gathering logs for storage-provisioner [e3c3c6294347] ...
	I0311 14:01:50.145559    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3c3c6294347"
	I0311 14:01:50.162112    4147 logs.go:123] Gathering logs for describe nodes ...
	I0311 14:01:50.162123    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0311 14:01:50.197485    4147 logs.go:123] Gathering logs for kube-apiserver [cd6426b65374] ...
	I0311 14:01:50.197499    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd6426b65374"
	I0311 14:01:50.212415    4147 logs.go:123] Gathering logs for container status ...
	I0311 14:01:50.212425    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0311 14:01:50.223933    4147 logs.go:123] Gathering logs for kubelet ...
	I0311 14:01:50.223946    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0311 14:01:50.261679    4147 logs.go:123] Gathering logs for dmesg ...
	I0311 14:01:50.261688    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0311 14:01:50.265691    4147 logs.go:123] Gathering logs for coredns [9472bc52aa3f] ...
	I0311 14:01:50.265700    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9472bc52aa3f"
	I0311 14:01:50.278542    4147 logs.go:123] Gathering logs for kube-scheduler [5f3d696666c1] ...
	I0311 14:01:50.281941    4147 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f3d696666c1"
	I0311 14:01:50.297043    4147 logs.go:123] Gathering logs for Docker ...
	I0311 14:01:50.297053    4147 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0311 14:01:52.822830    4147 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0311 14:01:57.825164    4147 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0311 14:01:57.831431    4147 out.go:177] 
	W0311 14:01:57.835487    4147 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0311 14:01:57.835525    4147 out.go:239] * 
	W0311 14:01:57.838131    4147 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:01:57.851394    4147 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-517000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (617.45s)

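The wait loop in the stderr above is mechanical: roughly every eight seconds minikube probes the guest apiserver's /healthz endpoint, and after each five-second client timeout it re-gathers component logs with the same two-step docker pattern (resolve the container ID by name filter, then tail the log). Both steps can be replayed by hand against a live profile; this is a minimal sketch, assuming the stopped-upgrade-517000 guest is still up and reachable at the 10.0.2.15:8443 address shown in the log:

	# Probe the same healthz URL the wait loop polls (-k: the apiserver
	# serves a self-signed cert; -m 5: match the loop's 5s client timeout).
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-517000 "curl -sk -m 5 https://10.0.2.15:8443/healthz"

	# Replay one log-gathering pass: name-filter for the component's
	# container ID, then tail its last 400 lines (the ID is from this run).
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-517000 "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-517000 "docker logs --tail 400 cd6426b65374"
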
TestPause/serial/Start (10.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-044000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-044000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.058024667s)

                                                
                                                
-- stdout --
	* [pause-044000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-044000" primary control-plane node in "pause-044000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-044000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-044000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-044000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-044000 -n pause-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-044000 -n pause-044000: exit status 7 (58.630083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.12s)

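Unlike the upgrade test above, this and the remaining qemu2 failures in the run die before the VM boots: the driver cannot reach the socket_vmnet socket at all ("Connection refused"). A quick hedged triage on the CI host, assuming the lima-vm socket_vmnet daemon with its default /var/run/socket_vmnet socket path (the install path and gateway address below are assumptions about this host, not taken from the log):

	# Does the socket exist, and is the daemon holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet

	# If the daemon is down, relaunch it (example invocation from the
	# socket_vmnet README; adjust the binary path to the local install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
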
TestNoKubernetes/serial/StartWithK8s (10.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-371000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-371000 --driver=qemu2 : exit status 80 (10.044029917s)

                                                
                                                
-- stdout --
	* [NoKubernetes-371000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-371000" primary control-plane node in "NoKubernetes-371000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-371000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000: exit status 7 (64.112875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.11s)

TestNoKubernetes/serial/StartWithStopK8s (5.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --driver=qemu2 : exit status 80 (5.862934458s)

                                                
                                                
-- stdout --
	* [NoKubernetes-371000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-371000
	* Restarting existing qemu2 VM for "NoKubernetes-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000: exit status 7 (70.543084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.93s)

TestNoKubernetes/serial/Start (5.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --driver=qemu2 : exit status 80 (5.893083125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-371000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-371000
	* Restarting existing qemu2 VM for "NoKubernetes-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000: exit status 7 (39.104542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.93s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.45s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.45s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.97s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.97s)

TestNoKubernetes/serial/StartNoArgs (5.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-371000 --driver=qemu2 
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18358
- KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2118057779/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-371000 --driver=qemu2 : exit status 80 (5.833911292s)

                                                
                                                
-- stdout --
	* [NoKubernetes-371000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-371000
	* Restarting existing qemu2 VM for "NoKubernetes-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-371000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-371000 -n NoKubernetes-371000: exit status 7 (34.629792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.87s)

TestNetworkPlugins/group/auto/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.840488583s)

                                                
                                                
-- stdout --
	* [auto-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-425000" primary control-plane node in "auto-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:03:45.840026    4655 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:03:45.840157    4655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:45.840161    4655 out.go:304] Setting ErrFile to fd 2...
	I0311 14:03:45.840167    4655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:45.840284    4655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:03:45.841387    4655 out.go:298] Setting JSON to false
	I0311 14:03:45.857465    4655 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3796,"bootTime":1710187229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:03:45.857526    4655 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:03:45.863900    4655 out.go:177] * [auto-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:03:45.869840    4655 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:03:45.873855    4655 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:03:45.869875    4655 notify.go:220] Checking for updates...
	I0311 14:03:45.879820    4655 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:03:45.882896    4655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:03:45.884337    4655 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:03:45.887835    4655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:03:45.891274    4655 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:45.891341    4655 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:45.891392    4655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:03:45.895694    4655 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:03:45.902844    4655 start.go:297] selected driver: qemu2
	I0311 14:03:45.902850    4655 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:03:45.902856    4655 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:03:45.905086    4655 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:03:45.907871    4655 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:03:45.910994    4655 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:03:45.911021    4655 cni.go:84] Creating CNI manager for ""
	I0311 14:03:45.911029    4655 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:03:45.911039    4655 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:03:45.911069    4655 start.go:340] cluster config:
	{Name:auto-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:03:45.915508    4655 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:03:45.921789    4655 out.go:177] * Starting "auto-425000" primary control-plane node in "auto-425000" cluster
	I0311 14:03:45.925843    4655 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:03:45.925856    4655 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:03:45.925868    4655 cache.go:56] Caching tarball of preloaded images
	I0311 14:03:45.925931    4655 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:03:45.925937    4655 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:03:45.925998    4655 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/auto-425000/config.json ...
	I0311 14:03:45.926009    4655 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/auto-425000/config.json: {Name:mkb75a4e8deff193b47f15ab9ae9ec28f972d95f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:03:45.926219    4655 start.go:360] acquireMachinesLock for auto-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:45.926252    4655 start.go:364] duration metric: took 26.542µs to acquireMachinesLock for "auto-425000"
	I0311 14:03:45.926263    4655 start.go:93] Provisioning new machine with config: &{Name:auto-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:45.926292    4655 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:45.934850    4655 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:03:45.953173    4655 start.go:159] libmachine.API.Create for "auto-425000" (driver="qemu2")
	I0311 14:03:45.953200    4655 client.go:168] LocalClient.Create starting
	I0311 14:03:45.953255    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:45.953286    4655 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:45.953297    4655 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:45.953344    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:45.953368    4655 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:45.953377    4655 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:45.953794    4655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:46.088739    4655 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:46.123299    4655 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:46.123305    4655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:46.123472    4655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2
	I0311 14:03:46.135622    4655 main.go:141] libmachine: STDOUT: 
	I0311 14:03:46.135639    4655 main.go:141] libmachine: STDERR: 
	I0311 14:03:46.135700    4655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2 +20000M
	I0311 14:03:46.146086    4655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:46.146102    4655 main.go:141] libmachine: STDERR: 
	I0311 14:03:46.146120    4655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2
	I0311 14:03:46.146125    4655 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:46.146149    4655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:45:0f:21:54:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2
	I0311 14:03:46.147728    4655 main.go:141] libmachine: STDOUT: 
	I0311 14:03:46.147744    4655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:46.147760    4655 client.go:171] duration metric: took 194.561791ms to LocalClient.Create
	I0311 14:03:48.149949    4655 start.go:128] duration metric: took 2.22368525s to createHost
	I0311 14:03:48.150058    4655 start.go:83] releasing machines lock for "auto-425000", held for 2.22386825s
	W0311 14:03:48.150119    4655 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:48.161062    4655 out.go:177] * Deleting "auto-425000" in qemu2 ...
	W0311 14:03:48.188431    4655 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:48.188461    4655 start.go:728] Will try again in 5 seconds ...
	I0311 14:03:53.190572    4655 start.go:360] acquireMachinesLock for auto-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:53.190951    4655 start.go:364] duration metric: took 272.333µs to acquireMachinesLock for "auto-425000"
	I0311 14:03:53.191080    4655 start.go:93] Provisioning new machine with config: &{Name:auto-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:53.191493    4655 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:53.202229    4655 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:03:53.251255    4655 start.go:159] libmachine.API.Create for "auto-425000" (driver="qemu2")
	I0311 14:03:53.251321    4655 client.go:168] LocalClient.Create starting
	I0311 14:03:53.251411    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:53.251469    4655 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:53.251487    4655 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:53.251551    4655 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:53.251599    4655 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:53.251612    4655 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:53.252126    4655 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:53.397075    4655 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:53.575062    4655 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:53.575071    4655 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:53.575253    4655 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2
	I0311 14:03:53.587715    4655 main.go:141] libmachine: STDOUT: 
	I0311 14:03:53.587736    4655 main.go:141] libmachine: STDERR: 
	I0311 14:03:53.587786    4655 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2 +20000M
	I0311 14:03:53.598303    4655 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:53.598321    4655 main.go:141] libmachine: STDERR: 
	I0311 14:03:53.598332    4655 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2
	I0311 14:03:53.598337    4655 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:53.598374    4655 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:27:53:0d:9f:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/auto-425000/disk.qcow2
	I0311 14:03:53.600025    4655 main.go:141] libmachine: STDOUT: 
	I0311 14:03:53.600043    4655 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:53.600055    4655 client.go:171] duration metric: took 348.740334ms to LocalClient.Create
	I0311 14:03:55.602177    4655 start.go:128] duration metric: took 2.410730208s to createHost
	I0311 14:03:55.602225    4655 start.go:83] releasing machines lock for "auto-425000", held for 2.411328s
	W0311 14:03:55.602589    4655 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:03:55.616164    4655 out.go:177] 
	W0311 14:03:55.620392    4655 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:03:55.620439    4655 out.go:239] * 
	* 
	W0311 14:03:55.622974    4655 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:03:55.635288    4655 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)

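The failure above is reproducible without minikube. The log shows the driver launching qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the socket and hands the connection to the wrapped command on fd 3 (hence "-netdev socket,id=net0,fd=3"). A minimal connection check, assuming the same client binary and socket path as in the log (any trivial payload command will do):

	# Fails immediately with the same "Connection refused" if the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

This isolates the fault to the socket_vmnet daemon rather than to QEMU, the qemu2 driver, or the CNI under test.
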
TestNetworkPlugins/group/calico/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.810868708s)

                                                
                                                
-- stdout --
	* [calico-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-425000" primary control-plane node in "calico-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:03:57.925959    4769 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:03:57.926118    4769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:57.926121    4769 out.go:304] Setting ErrFile to fd 2...
	I0311 14:03:57.926123    4769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:03:57.926261    4769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:03:57.927312    4769 out.go:298] Setting JSON to false
	I0311 14:03:57.943269    4769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3808,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:03:57.943332    4769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:03:57.950332    4769 out.go:177] * [calico-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:03:57.957268    4769 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:03:57.962325    4769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:03:57.957322    4769 notify.go:220] Checking for updates...
	I0311 14:03:57.968191    4769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:03:57.971278    4769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:03:57.974318    4769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:03:57.977233    4769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:03:57.980652    4769 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:57.980717    4769 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:03:57.980765    4769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:03:57.985249    4769 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:03:57.992260    4769 start.go:297] selected driver: qemu2
	I0311 14:03:57.992267    4769 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:03:57.992273    4769 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:03:57.994563    4769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:03:57.998285    4769 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:03:58.001377    4769 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:03:58.001421    4769 cni.go:84] Creating CNI manager for "calico"
	I0311 14:03:58.001426    4769 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0311 14:03:58.001456    4769 start.go:340] cluster config:
	{Name:calico-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:03:58.006086    4769 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:03:58.013261    4769 out.go:177] * Starting "calico-425000" primary control-plane node in "calico-425000" cluster
	I0311 14:03:58.017274    4769 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:03:58.017295    4769 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:03:58.017306    4769 cache.go:56] Caching tarball of preloaded images
	I0311 14:03:58.017370    4769 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:03:58.017384    4769 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:03:58.017441    4769 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/calico-425000/config.json ...
	I0311 14:03:58.017453    4769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/calico-425000/config.json: {Name:mkcfa6bd95acd2326dc26e0695802072f94aaf7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:03:58.017668    4769 start.go:360] acquireMachinesLock for calico-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:03:58.017700    4769 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "calico-425000"
	I0311 14:03:58.017712    4769 start.go:93] Provisioning new machine with config: &{Name:calico-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:03:58.017738    4769 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:03:58.026283    4769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:03:58.044475    4769 start.go:159] libmachine.API.Create for "calico-425000" (driver="qemu2")
	I0311 14:03:58.044500    4769 client.go:168] LocalClient.Create starting
	I0311 14:03:58.044566    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:03:58.044599    4769 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:58.044609    4769 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:58.044658    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:03:58.044681    4769 main.go:141] libmachine: Decoding PEM data...
	I0311 14:03:58.044687    4769 main.go:141] libmachine: Parsing certificate...
	I0311 14:03:58.045044    4769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:03:58.179802    4769 main.go:141] libmachine: Creating SSH key...
	I0311 14:03:58.267766    4769 main.go:141] libmachine: Creating Disk image...
	I0311 14:03:58.267776    4769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:03:58.267937    4769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2
	I0311 14:03:58.279962    4769 main.go:141] libmachine: STDOUT: 
	I0311 14:03:58.279992    4769 main.go:141] libmachine: STDERR: 
	I0311 14:03:58.280044    4769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2 +20000M
	I0311 14:03:58.291285    4769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:03:58.291300    4769 main.go:141] libmachine: STDERR: 
	I0311 14:03:58.291312    4769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2
	I0311 14:03:58.291316    4769 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:03:58.291350    4769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5e:a6:32:51:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2
	I0311 14:03:58.293148    4769 main.go:141] libmachine: STDOUT: 
	I0311 14:03:58.293164    4769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:03:58.293187    4769 client.go:171] duration metric: took 248.689167ms to LocalClient.Create
	I0311 14:04:00.295299    4769 start.go:128] duration metric: took 2.277612833s to createHost
	I0311 14:04:00.295370    4769 start.go:83] releasing machines lock for "calico-425000", held for 2.277732958s
	W0311 14:04:00.295454    4769 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:00.311567    4769 out.go:177] * Deleting "calico-425000" in qemu2 ...
	W0311 14:04:00.335850    4769 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:00.335881    4769 start.go:728] Will try again in 5 seconds ...
	I0311 14:04:05.337958    4769 start.go:360] acquireMachinesLock for calico-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:05.338361    4769 start.go:364] duration metric: took 318.167µs to acquireMachinesLock for "calico-425000"
	I0311 14:04:05.338530    4769 start.go:93] Provisioning new machine with config: &{Name:calico-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:05.338862    4769 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:05.347578    4769 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:05.396520    4769 start.go:159] libmachine.API.Create for "calico-425000" (driver="qemu2")
	I0311 14:04:05.396584    4769 client.go:168] LocalClient.Create starting
	I0311 14:04:05.396694    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:05.396746    4769 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:05.396764    4769 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:05.396833    4769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:05.396875    4769 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:05.396887    4769 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:05.397449    4769 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:05.542583    4769 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:05.629704    4769 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:05.629710    4769 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:05.629876    4769 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2
	I0311 14:04:05.642226    4769 main.go:141] libmachine: STDOUT: 
	I0311 14:04:05.642248    4769 main.go:141] libmachine: STDERR: 
	I0311 14:04:05.642297    4769 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2 +20000M
	I0311 14:04:05.652757    4769 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:05.652776    4769 main.go:141] libmachine: STDERR: 
	I0311 14:04:05.652797    4769 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2
	I0311 14:04:05.652803    4769 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:05.652832    4769 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:18:d8:da:b7:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/calico-425000/disk.qcow2
	I0311 14:04:05.654506    4769 main.go:141] libmachine: STDOUT: 
	I0311 14:04:05.654520    4769 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:05.654531    4769 client.go:171] duration metric: took 257.950791ms to LocalClient.Create
	I0311 14:04:07.656640    4769 start.go:128] duration metric: took 2.317813291s to createHost
	I0311 14:04:07.656691    4769 start.go:83] releasing machines lock for "calico-425000", held for 2.318382125s
	W0311 14:04:07.657050    4769 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:07.669721    4769 out.go:177] 
	W0311 14:04:07.675373    4769 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:04:07.675413    4769 out.go:239] * 
	* 
	W0311 14:04:07.677221    4769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:04:07.687243    4769 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)

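The CNI selection (auto above, calico here, custom-flannel below) never changes the outcome, so the practical recovery is to restart the socket_vmnet daemon before re-running the suite. A sketch, assuming a Homebrew-managed service as the /opt/socket_vmnet install paths suggest (the daemon must run as root to use vmnet):

	sudo brew services restart socket_vmnet
	# Confirm the socket is back before re-running:
	ls -l /var/run/socket_vmnet

Any of the failing starts, for example "out/minikube-darwin-arm64 start -p calico-425000 --cni=calico --driver=qemu2", then serves as a smoke test.
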
TestNetworkPlugins/group/custom-flannel/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.793369625s)

                                                
                                                
-- stdout --
	* [custom-flannel-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-425000" primary control-plane node in "custom-flannel-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:04:10.151612    4888 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:04:10.151738    4888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:10.151741    4888 out.go:304] Setting ErrFile to fd 2...
	I0311 14:04:10.151744    4888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:10.151876    4888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:04:10.152904    4888 out.go:298] Setting JSON to false
	I0311 14:04:10.168896    4888 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3821,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:04:10.168968    4888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:04:10.175372    4888 out.go:177] * [custom-flannel-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:04:10.182468    4888 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:04:10.186416    4888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:04:10.182514    4888 notify.go:220] Checking for updates...
	I0311 14:04:10.192442    4888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:04:10.195352    4888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:04:10.198438    4888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:04:10.201453    4888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:04:10.204936    4888 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:10.205004    4888 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:10.205057    4888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:04:10.209387    4888 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:04:10.216387    4888 start.go:297] selected driver: qemu2
	I0311 14:04:10.216392    4888 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:04:10.216399    4888 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:04:10.218638    4888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:04:10.222377    4888 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:04:10.225517    4888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:04:10.225558    4888 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0311 14:04:10.225567    4888 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0311 14:04:10.225602    4888 start.go:340] cluster config:
	{Name:custom-flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:04:10.230169    4888 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:04:10.238434    4888 out.go:177] * Starting "custom-flannel-425000" primary control-plane node in "custom-flannel-425000" cluster
	I0311 14:04:10.241392    4888 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:04:10.241406    4888 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:04:10.241419    4888 cache.go:56] Caching tarball of preloaded images
	I0311 14:04:10.241493    4888 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:04:10.241506    4888 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:04:10.241584    4888 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/custom-flannel-425000/config.json ...
	I0311 14:04:10.241594    4888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/custom-flannel-425000/config.json: {Name:mkeb0f131028c0f9f402c0dd697ca1b0382cd164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:04:10.241796    4888 start.go:360] acquireMachinesLock for custom-flannel-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:10.241834    4888 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "custom-flannel-425000"
	I0311 14:04:10.241845    4888 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:10.241872    4888 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:10.248387    4888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:10.265031    4888 start.go:159] libmachine.API.Create for "custom-flannel-425000" (driver="qemu2")
	I0311 14:04:10.265060    4888 client.go:168] LocalClient.Create starting
	I0311 14:04:10.265114    4888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:10.265140    4888 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:10.265150    4888 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:10.265191    4888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:10.265213    4888 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:10.265220    4888 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:10.265558    4888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:10.400559    4888 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:10.469739    4888 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:10.469747    4888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:10.469917    4888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0311 14:04:10.481929    4888 main.go:141] libmachine: STDOUT: 
	I0311 14:04:10.481957    4888 main.go:141] libmachine: STDERR: 
	I0311 14:04:10.482005    4888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2 +20000M
	I0311 14:04:10.492781    4888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:10.492796    4888 main.go:141] libmachine: STDERR: 
	I0311 14:04:10.492815    4888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0311 14:04:10.492818    4888 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:10.492857    4888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:0f:1c:a7:a6:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0311 14:04:10.494555    4888 main.go:141] libmachine: STDOUT: 
	I0311 14:04:10.494572    4888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:10.494591    4888 client.go:171] duration metric: took 229.530792ms to LocalClient.Create
	I0311 14:04:12.496748    4888 start.go:128] duration metric: took 2.254922417s to createHost
	I0311 14:04:12.496818    4888 start.go:83] releasing machines lock for "custom-flannel-425000", held for 2.255047792s
	W0311 14:04:12.496869    4888 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:12.507010    4888 out.go:177] * Deleting "custom-flannel-425000" in qemu2 ...
	W0311 14:04:12.535357    4888 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:12.535386    4888 start.go:728] Will try again in 5 seconds ...
	I0311 14:04:17.537401    4888 start.go:360] acquireMachinesLock for custom-flannel-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:17.537789    4888 start.go:364] duration metric: took 294.125µs to acquireMachinesLock for "custom-flannel-425000"
	I0311 14:04:17.537903    4888 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:17.538244    4888 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:17.548019    4888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:17.597164    4888 start.go:159] libmachine.API.Create for "custom-flannel-425000" (driver="qemu2")
	I0311 14:04:17.597203    4888 client.go:168] LocalClient.Create starting
	I0311 14:04:17.597325    4888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:17.597388    4888 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:17.597405    4888 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:17.597473    4888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:17.597515    4888 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:17.597529    4888 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:17.598079    4888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:17.743624    4888 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:17.838550    4888 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:17.838555    4888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:17.838739    4888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0311 14:04:17.850953    4888 main.go:141] libmachine: STDOUT: 
	I0311 14:04:17.850981    4888 main.go:141] libmachine: STDERR: 
	I0311 14:04:17.851058    4888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2 +20000M
	I0311 14:04:17.861451    4888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:17.861467    4888 main.go:141] libmachine: STDERR: 
	I0311 14:04:17.861484    4888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0311 14:04:17.861497    4888 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:17.861525    4888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:40:7b:21:37:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/custom-flannel-425000/disk.qcow2
	I0311 14:04:17.863196    4888 main.go:141] libmachine: STDOUT: 
	I0311 14:04:17.863213    4888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:17.863224    4888 client.go:171] duration metric: took 266.02375ms to LocalClient.Create
	I0311 14:04:19.865340    4888 start.go:128] duration metric: took 2.327138958s to createHost
	I0311 14:04:19.865461    4888 start.go:83] releasing machines lock for "custom-flannel-425000", held for 2.327662791s
	W0311 14:04:19.865927    4888 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:19.880526    4888 out.go:177] 
	W0311 14:04:19.883564    4888 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:04:19.883605    4888 out.go:239] * 
	* 
	W0311 14:04:19.886274    4888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:04:19.898525    4888 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
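
Note: every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched. Below is a minimal Go sketch of a pre-flight probe for that socket; the file name, timeout, and messages are illustrative, not part of the test suite.

	// probe_socket_vmnet.go: dials the unix socket that socket_vmnet_client
	// needs; when the socket_vmnet daemon is not running, this reproduces the
	// "Connection refused" recorded in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing qemu command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running this probe before the suite would distinguish an environment problem (daemon not started on the CI host) from a genuine regression in the network-plugin tests.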

                                                
                                    
TestNetworkPlugins/group/false/Start (9.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0311 14:04:28.991131    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.854477792s)

                                                
                                                
-- stdout --
	* [false-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-425000" primary control-plane node in "false-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:04:22.411343    5006 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:04:22.411478    5006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:22.411482    5006 out.go:304] Setting ErrFile to fd 2...
	I0311 14:04:22.411484    5006 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:22.411634    5006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:04:22.412648    5006 out.go:298] Setting JSON to false
	I0311 14:04:22.428690    5006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3833,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:04:22.428751    5006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:04:22.435086    5006 out.go:177] * [false-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:04:22.441207    5006 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:04:22.445176    5006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:04:22.441259    5006 notify.go:220] Checking for updates...
	I0311 14:04:22.451165    5006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:04:22.454205    5006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:04:22.457218    5006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:04:22.460222    5006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:04:22.463532    5006 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:22.463603    5006 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:22.463647    5006 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:04:22.468214    5006 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:04:22.474134    5006 start.go:297] selected driver: qemu2
	I0311 14:04:22.474140    5006 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:04:22.474146    5006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:04:22.476458    5006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:04:22.479154    5006 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:04:22.482298    5006 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:04:22.482317    5006 cni.go:84] Creating CNI manager for "false"
	I0311 14:04:22.482336    5006 start.go:340] cluster config:
	{Name:false-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:04:22.486882    5006 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:04:22.495146    5006 out.go:177] * Starting "false-425000" primary control-plane node in "false-425000" cluster
	I0311 14:04:22.498135    5006 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:04:22.498150    5006 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:04:22.498163    5006 cache.go:56] Caching tarball of preloaded images
	I0311 14:04:22.498216    5006 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:04:22.498223    5006 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:04:22.498274    5006 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/false-425000/config.json ...
	I0311 14:04:22.498289    5006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/false-425000/config.json: {Name:mkeb04a020fa6169cb9564050cb65d5de201e310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:04:22.498501    5006 start.go:360] acquireMachinesLock for false-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:22.498534    5006 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "false-425000"
	I0311 14:04:22.498546    5006 start.go:93] Provisioning new machine with config: &{Name:false-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:22.498587    5006 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:22.505170    5006 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:22.523492    5006 start.go:159] libmachine.API.Create for "false-425000" (driver="qemu2")
	I0311 14:04:22.523513    5006 client.go:168] LocalClient.Create starting
	I0311 14:04:22.523561    5006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:22.523590    5006 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:22.523599    5006 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:22.523642    5006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:22.523669    5006 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:22.523677    5006 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:22.524094    5006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:22.659423    5006 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:22.790402    5006 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:22.790409    5006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:22.790589    5006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2
	I0311 14:04:22.802969    5006 main.go:141] libmachine: STDOUT: 
	I0311 14:04:22.802991    5006 main.go:141] libmachine: STDERR: 
	I0311 14:04:22.803040    5006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2 +20000M
	I0311 14:04:22.813818    5006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:22.813834    5006 main.go:141] libmachine: STDERR: 
	I0311 14:04:22.813855    5006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2
	I0311 14:04:22.813860    5006 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:22.813884    5006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:31:e4:95:c0:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2
	I0311 14:04:22.815532    5006 main.go:141] libmachine: STDOUT: 
	I0311 14:04:22.815547    5006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:22.815574    5006 client.go:171] duration metric: took 292.055708ms to LocalClient.Create
	I0311 14:04:24.817754    5006 start.go:128] duration metric: took 2.319211292s to createHost
	I0311 14:04:24.817869    5006 start.go:83] releasing machines lock for "false-425000", held for 2.319376167s
	W0311 14:04:24.817937    5006 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:24.829054    5006 out.go:177] * Deleting "false-425000" in qemu2 ...
	W0311 14:04:24.855228    5006 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:24.855254    5006 start.go:728] Will try again in 5 seconds ...
	I0311 14:04:29.857351    5006 start.go:360] acquireMachinesLock for false-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:29.857784    5006 start.go:364] duration metric: took 287.167µs to acquireMachinesLock for "false-425000"
	I0311 14:04:29.857898    5006 start.go:93] Provisioning new machine with config: &{Name:false-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:29.858142    5006 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:29.867813    5006 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:29.917507    5006 start.go:159] libmachine.API.Create for "false-425000" (driver="qemu2")
	I0311 14:04:29.917588    5006 client.go:168] LocalClient.Create starting
	I0311 14:04:29.917694    5006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:29.917755    5006 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:29.917777    5006 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:29.917836    5006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:29.917877    5006 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:29.917907    5006 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:29.918494    5006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:30.064163    5006 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:30.167332    5006 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:30.167343    5006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:30.167537    5006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2
	I0311 14:04:30.179953    5006 main.go:141] libmachine: STDOUT: 
	I0311 14:04:30.179975    5006 main.go:141] libmachine: STDERR: 
	I0311 14:04:30.180041    5006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2 +20000M
	I0311 14:04:30.190589    5006 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:30.190603    5006 main.go:141] libmachine: STDERR: 
	I0311 14:04:30.190618    5006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2
	I0311 14:04:30.190627    5006 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:30.190672    5006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:a2:a3:71:2c:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/false-425000/disk.qcow2
	I0311 14:04:30.192360    5006 main.go:141] libmachine: STDOUT: 
	I0311 14:04:30.192376    5006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:30.192393    5006 client.go:171] duration metric: took 274.806583ms to LocalClient.Create
	I0311 14:04:32.194080    5006 start.go:128] duration metric: took 2.335985666s to createHost
	I0311 14:04:32.194112    5006 start.go:83] releasing machines lock for "false-425000", held for 2.336383291s
	W0311 14:04:32.194319    5006 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:32.204949    5006 out.go:177] 
	W0311 14:04:32.208005    5006 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:04:32.208040    5006 out.go:239] * 
	* 
	W0311 14:04:32.210849    5006 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:04:32.222896    5006 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
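
Note: in each attempt above, libmachine prepares the disk image successfully with two qemu-img calls, logged verbatim ("convert -f raw -O qcow2 ..." then "resize ... +20000M"); only the subsequent socket_vmnet connection fails. The Go sketch below mirrors those two invocations via os/exec for anyone reproducing the disk-creation step in isolation; the /tmp paths are placeholders, not the suite's real machine directory.

	// disk_image_sketch.go: illustrative reproduction of the two qemu-img
	// invocations shown in the logs (convert raw -> qcow2, then grow by 20000 MB).
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		raw := "/tmp/demo/disk.qcow2.raw" // placeholder for .minikube/machines/<profile>/disk.qcow2.raw
		img := "/tmp/demo/disk.qcow2"     // placeholder output path
		// Step 1, as logged: qemu-img convert -f raw -O qcow2 <raw> <qcow2>
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
		// Step 2, as logged: qemu-img resize <qcow2> +20000M
		run("qemu-img", "resize", img, "+20000M")
	}

Since both qemu-img steps succeed in every log above, the disk pipeline can be ruled out as a cause; the failures begin strictly at the socket_vmnet_client launch.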

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.793355333s)

                                                
                                                
-- stdout --
	* [kindnet-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-425000" primary control-plane node in "kindnet-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:04:34.536112    5121 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:04:34.536246    5121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:34.536250    5121 out.go:304] Setting ErrFile to fd 2...
	I0311 14:04:34.536253    5121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:34.536377    5121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:04:34.537399    5121 out.go:298] Setting JSON to false
	I0311 14:04:34.553310    5121 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3845,"bootTime":1710187229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:04:34.553367    5121 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:04:34.559107    5121 out.go:177] * [kindnet-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:04:34.566130    5121 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:04:34.570064    5121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:04:34.566191    5121 notify.go:220] Checking for updates...
	I0311 14:04:34.577098    5121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:04:34.581044    5121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:04:34.584114    5121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:04:34.587142    5121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:04:34.590462    5121 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:34.590537    5121 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:34.590582    5121 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:04:34.595076    5121 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:04:34.602073    5121 start.go:297] selected driver: qemu2
	I0311 14:04:34.602079    5121 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:04:34.602086    5121 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:04:34.604406    5121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:04:34.609080    5121 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:04:34.612217    5121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:04:34.612254    5121 cni.go:84] Creating CNI manager for "kindnet"
	I0311 14:04:34.612258    5121 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 14:04:34.612288    5121 start.go:340] cluster config:
	{Name:kindnet-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:04:34.616936    5121 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:04:34.629055    5121 out.go:177] * Starting "kindnet-425000" primary control-plane node in "kindnet-425000" cluster
	I0311 14:04:34.632025    5121 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:04:34.632040    5121 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:04:34.632060    5121 cache.go:56] Caching tarball of preloaded images
	I0311 14:04:34.632141    5121 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:04:34.632149    5121 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:04:34.632210    5121 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kindnet-425000/config.json ...
	I0311 14:04:34.632222    5121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kindnet-425000/config.json: {Name:mk38ebb565d9c382fb004c674a8894ada0d62745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:04:34.632453    5121 start.go:360] acquireMachinesLock for kindnet-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:34.632491    5121 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "kindnet-425000"
	I0311 14:04:34.632504    5121 start.go:93] Provisioning new machine with config: &{Name:kindnet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:34.632533    5121 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:34.641261    5121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:34.660138    5121 start.go:159] libmachine.API.Create for "kindnet-425000" (driver="qemu2")
	I0311 14:04:34.660161    5121 client.go:168] LocalClient.Create starting
	I0311 14:04:34.660216    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:34.660246    5121 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:34.660257    5121 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:34.660301    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:34.660325    5121 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:34.660332    5121 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:34.660713    5121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:34.799373    5121 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:34.843468    5121 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:34.843473    5121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:34.843636    5121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2
	I0311 14:04:34.855976    5121 main.go:141] libmachine: STDOUT: 
	I0311 14:04:34.855995    5121 main.go:141] libmachine: STDERR: 
	I0311 14:04:34.856045    5121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2 +20000M
	I0311 14:04:34.866568    5121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:34.866584    5121 main.go:141] libmachine: STDERR: 
	I0311 14:04:34.866598    5121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2
	I0311 14:04:34.866602    5121 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:34.866643    5121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:64:69:55:27:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2
	I0311 14:04:34.868349    5121 main.go:141] libmachine: STDOUT: 
	I0311 14:04:34.868366    5121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:34.868384    5121 client.go:171] duration metric: took 208.224458ms to LocalClient.Create
	I0311 14:04:36.869680    5121 start.go:128] duration metric: took 2.237189583s to createHost
	I0311 14:04:36.869758    5121 start.go:83] releasing machines lock for "kindnet-425000", held for 2.237328417s
	W0311 14:04:36.869814    5121 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:36.880970    5121 out.go:177] * Deleting "kindnet-425000" in qemu2 ...
	W0311 14:04:36.905677    5121 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:36.905716    5121 start.go:728] Will try again in 5 seconds ...
	I0311 14:04:41.907802    5121 start.go:360] acquireMachinesLock for kindnet-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:41.908223    5121 start.go:364] duration metric: took 338.583µs to acquireMachinesLock for "kindnet-425000"
	I0311 14:04:41.908327    5121 start.go:93] Provisioning new machine with config: &{Name:kindnet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kindnet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:41.908586    5121 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:41.919029    5121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:41.969751    5121 start.go:159] libmachine.API.Create for "kindnet-425000" (driver="qemu2")
	I0311 14:04:41.969798    5121 client.go:168] LocalClient.Create starting
	I0311 14:04:41.969897    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:41.969959    5121 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:41.969974    5121 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:41.970029    5121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:41.970070    5121 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:41.970085    5121 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:41.970604    5121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:42.115558    5121 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:42.225126    5121 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:42.225131    5121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:42.225304    5121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2
	I0311 14:04:42.237613    5121 main.go:141] libmachine: STDOUT: 
	I0311 14:04:42.237642    5121 main.go:141] libmachine: STDERR: 
	I0311 14:04:42.237708    5121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2 +20000M
	I0311 14:04:42.248222    5121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:42.248239    5121 main.go:141] libmachine: STDERR: 
	I0311 14:04:42.248253    5121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2
	I0311 14:04:42.248259    5121 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:42.248292    5121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:3e:4f:e3:c4:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kindnet-425000/disk.qcow2
	I0311 14:04:42.250035    5121 main.go:141] libmachine: STDOUT: 
	I0311 14:04:42.250049    5121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:42.250061    5121 client.go:171] duration metric: took 280.26775ms to LocalClient.Create
	I0311 14:04:44.252167    5121 start.go:128] duration metric: took 2.3436265s to createHost
	I0311 14:04:44.252231    5121 start.go:83] releasing machines lock for "kindnet-425000", held for 2.344058459s
	W0311 14:04:44.252752    5121 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:44.266230    5121 out.go:177] 
	W0311 14:04:44.270422    5121 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:04:44.270450    5121 out.go:239] * 
	* 
	W0311 14:04:44.273069    5121 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:04:44.285278    5121 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)
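Note: every start in this group fails at the same step, before a VM exists: socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so qemu-system-aarch64 is never launched and minikube exits with status 80 (GUEST_PROVISION). The CNI under test is irrelevant to the failure. The following is a minimal Go sketch (not part of the test harness) that reproduces just the failing connect step; the socket path is the SocketVMnetPath value from the config dump above:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the cluster config logged above.
		const sock = "/var/run/socket_vmnet"

		// socket_vmnet_client's first step is a unix-socket connect; with no
		// daemon listening it fails with "connection refused", matching the
		// STDERR captured in the log.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening on", sock)
	}

If the probe fails, (re)starting the daemon on the agent (for a Homebrew install, the minikube qemu2 driver docs use `sudo brew services start socket_vmnet`) should clear this whole group of failures.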

TestNetworkPlugins/group/flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.897400667s)

-- stdout --
	* [flannel-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-425000" primary control-plane node in "flannel-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:04:46.712589    5238 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:04:46.712722    5238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:46.712725    5238 out.go:304] Setting ErrFile to fd 2...
	I0311 14:04:46.712728    5238 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:46.712856    5238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:04:46.713936    5238 out.go:298] Setting JSON to false
	I0311 14:04:46.730152    5238 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3857,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:04:46.730204    5238 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:04:46.736102    5238 out.go:177] * [flannel-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:04:46.744073    5238 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:04:46.744123    5238 notify.go:220] Checking for updates...
	I0311 14:04:46.749995    5238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:04:46.756968    5238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:04:46.760026    5238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:04:46.763051    5238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:04:46.764534    5238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:04:46.768443    5238 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:46.768527    5238 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:46.768571    5238 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:04:46.772053    5238 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:04:46.778007    5238 start.go:297] selected driver: qemu2
	I0311 14:04:46.778012    5238 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:04:46.778019    5238 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:04:46.780328    5238 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:04:46.784106    5238 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:04:46.788005    5238 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:04:46.788052    5238 cni.go:84] Creating CNI manager for "flannel"
	I0311 14:04:46.788057    5238 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0311 14:04:46.788093    5238 start.go:340] cluster config:
	{Name:flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:04:46.792761    5238 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:04:46.801090    5238 out.go:177] * Starting "flannel-425000" primary control-plane node in "flannel-425000" cluster
	I0311 14:04:46.805043    5238 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:04:46.805059    5238 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:04:46.805073    5238 cache.go:56] Caching tarball of preloaded images
	I0311 14:04:46.805140    5238 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:04:46.805152    5238 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:04:46.805213    5238 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/flannel-425000/config.json ...
	I0311 14:04:46.805224    5238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/flannel-425000/config.json: {Name:mka867de70156a918dfdd80c8636805c4286b2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:04:46.805434    5238 start.go:360] acquireMachinesLock for flannel-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:46.805467    5238 start.go:364] duration metric: took 27.084µs to acquireMachinesLock for "flannel-425000"
	I0311 14:04:46.805479    5238 start.go:93] Provisioning new machine with config: &{Name:flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:46.805511    5238 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:46.814039    5238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:46.832586    5238 start.go:159] libmachine.API.Create for "flannel-425000" (driver="qemu2")
	I0311 14:04:46.832614    5238 client.go:168] LocalClient.Create starting
	I0311 14:04:46.832690    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:46.832720    5238 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:46.832732    5238 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:46.832781    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:46.832805    5238 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:46.832814    5238 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:46.833228    5238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:46.967842    5238 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:47.197592    5238 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:47.197602    5238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:47.197803    5238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2
	I0311 14:04:47.210546    5238 main.go:141] libmachine: STDOUT: 
	I0311 14:04:47.210567    5238 main.go:141] libmachine: STDERR: 
	I0311 14:04:47.210624    5238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2 +20000M
	I0311 14:04:47.221330    5238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:47.221343    5238 main.go:141] libmachine: STDERR: 
	I0311 14:04:47.221357    5238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2
	I0311 14:04:47.221365    5238 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:47.221403    5238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:8b:f5:a2:fa:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2
	I0311 14:04:47.223115    5238 main.go:141] libmachine: STDOUT: 
	I0311 14:04:47.223131    5238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:47.223149    5238 client.go:171] duration metric: took 390.542458ms to LocalClient.Create
	I0311 14:04:49.225293    5238 start.go:128] duration metric: took 2.419838583s to createHost
	I0311 14:04:49.225428    5238 start.go:83] releasing machines lock for "flannel-425000", held for 2.41996375s
	W0311 14:04:49.225495    5238 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:49.243670    5238 out.go:177] * Deleting "flannel-425000" in qemu2 ...
	W0311 14:04:49.269178    5238 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:49.269210    5238 start.go:728] Will try again in 5 seconds ...
	I0311 14:04:54.276746    5238 start.go:360] acquireMachinesLock for flannel-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:54.277313    5238 start.go:364] duration metric: took 435.25µs to acquireMachinesLock for "flannel-425000"
	I0311 14:04:54.277484    5238 start.go:93] Provisioning new machine with config: &{Name:flannel-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:flannel-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:54.277832    5238 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:54.287499    5238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:54.336773    5238 start.go:159] libmachine.API.Create for "flannel-425000" (driver="qemu2")
	I0311 14:04:54.336815    5238 client.go:168] LocalClient.Create starting
	I0311 14:04:54.336937    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:54.336989    5238 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:54.337008    5238 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:54.337072    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:54.337118    5238 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:54.337129    5238 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:54.337733    5238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:54.483063    5238 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:54.515541    5238 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:54.515546    5238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:54.515702    5238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2
	I0311 14:04:54.528026    5238 main.go:141] libmachine: STDOUT: 
	I0311 14:04:54.528061    5238 main.go:141] libmachine: STDERR: 
	I0311 14:04:54.528125    5238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2 +20000M
	I0311 14:04:54.538672    5238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:54.538688    5238 main.go:141] libmachine: STDERR: 
	I0311 14:04:54.538706    5238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2
	I0311 14:04:54.538710    5238 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:54.538743    5238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5a:03:ba:6e:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/flannel-425000/disk.qcow2
	I0311 14:04:54.540434    5238 main.go:141] libmachine: STDOUT: 
	I0311 14:04:54.540449    5238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:54.540466    5238 client.go:171] duration metric: took 203.445666ms to LocalClient.Create
	I0311 14:04:56.544428    5238 start.go:128] duration metric: took 2.264529333s to createHost
	I0311 14:04:56.544484    5238 start.go:83] releasing machines lock for "flannel-425000", held for 2.265079041s
	W0311 14:04:56.544915    5238 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:04:56.553421    5238 out.go:177] 
	W0311 14:04:56.559358    5238 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:04:56.559428    5238 out.go:239] * 
	* 
	W0311 14:04:56.562171    5238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:04:56.571394    5238 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.90s)
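Note: the steps that do succeed above ("Creating 20000 MB hard disk image...") are two qemu-img invocations: a raw-to-qcow2 convert of the seed disk followed by a +20000M resize, whose empty STDOUT/STDERR and "Image resized." are logged verbatim. A hedged Go sketch of that sequence, with placeholder file names standing in for the per-profile paths under .minikube/machines (illustrative, not the driver's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command and mirrors its output, much as the driver
	// records STDOUT:/STDERR: for each qemu-img call above.
	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Placeholders for .minikube/machines/<profile>/disk.qcow2.raw and disk.qcow2.
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"

		// Convert the raw boot image to qcow2...
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
			fmt.Fprintln(os.Stderr, "convert failed:", err)
			os.Exit(1)
		}
		// ...then grow it by 20000 MB ("Image resized." in the log).
		if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
			fmt.Fprintln(os.Stderr, "resize failed:", err)
			os.Exit(1)
		}
	}

Only after this disk preparation does the driver hand the qcow2 to qemu-system-aarch64 via socket_vmnet_client, which is where the run aborts again.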

TestNetworkPlugins/group/enable-default-cni/Start (9.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.910409708s)

-- stdout --
	* [enable-default-cni-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-425000" primary control-plane node in "enable-default-cni-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:04:59.102147    5356 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:04:59.102277    5356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:59.102280    5356 out.go:304] Setting ErrFile to fd 2...
	I0311 14:04:59.102283    5356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:04:59.102404    5356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:04:59.103487    5356 out.go:298] Setting JSON to false
	I0311 14:04:59.119753    5356 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3870,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:04:59.119809    5356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:04:59.126533    5356 out.go:177] * [enable-default-cni-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:04:59.133496    5356 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:04:59.136567    5356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:04:59.133557    5356 notify.go:220] Checking for updates...
	I0311 14:04:59.143561    5356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:04:59.146549    5356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:04:59.149589    5356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:04:59.152522    5356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:04:59.155858    5356 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:59.155927    5356 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:04:59.155974    5356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:04:59.160555    5356 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:04:59.167506    5356 start.go:297] selected driver: qemu2
	I0311 14:04:59.167511    5356 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:04:59.167520    5356 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:04:59.169807    5356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:04:59.174560    5356 out.go:177] * Automatically selected the socket_vmnet network
	E0311 14:04:59.177581    5356 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0311 14:04:59.177596    5356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:04:59.177612    5356 cni.go:84] Creating CNI manager for "bridge"
	I0311 14:04:59.177616    5356 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:04:59.177648    5356 start.go:340] cluster config:
	{Name:enable-default-cni-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:04:59.182324    5356 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:04:59.188507    5356 out.go:177] * Starting "enable-default-cni-425000" primary control-plane node in "enable-default-cni-425000" cluster
	I0311 14:04:59.192537    5356 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:04:59.192554    5356 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:04:59.192568    5356 cache.go:56] Caching tarball of preloaded images
	I0311 14:04:59.192632    5356 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:04:59.192638    5356 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:04:59.192708    5356 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/enable-default-cni-425000/config.json ...
	I0311 14:04:59.192718    5356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/enable-default-cni-425000/config.json: {Name:mk931641a9995a9d0def1bfbd6df2952c3e454c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:04:59.192933    5356 start.go:360] acquireMachinesLock for enable-default-cni-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:04:59.192967    5356 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "enable-default-cni-425000"
	I0311 14:04:59.192979    5356 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:04:59.193015    5356 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:04:59.201591    5356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:04:59.220166    5356 start.go:159] libmachine.API.Create for "enable-default-cni-425000" (driver="qemu2")
	I0311 14:04:59.220197    5356 client.go:168] LocalClient.Create starting
	I0311 14:04:59.220273    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:04:59.220303    5356 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:59.220317    5356 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:59.220361    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:04:59.220384    5356 main.go:141] libmachine: Decoding PEM data...
	I0311 14:04:59.220391    5356 main.go:141] libmachine: Parsing certificate...
	I0311 14:04:59.220794    5356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:04:59.356893    5356 main.go:141] libmachine: Creating SSH key...
	I0311 14:04:59.522166    5356 main.go:141] libmachine: Creating Disk image...
	I0311 14:04:59.522174    5356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:04:59.522351    5356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0311 14:04:59.534942    5356 main.go:141] libmachine: STDOUT: 
	I0311 14:04:59.534959    5356 main.go:141] libmachine: STDERR: 
	I0311 14:04:59.535014    5356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2 +20000M
	I0311 14:04:59.545543    5356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:04:59.545566    5356 main.go:141] libmachine: STDERR: 
	I0311 14:04:59.545583    5356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0311 14:04:59.545588    5356 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:04:59.545617    5356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:08:fc:d2:9b:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0311 14:04:59.547335    5356 main.go:141] libmachine: STDOUT: 
	I0311 14:04:59.547349    5356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:04:59.547367    5356 client.go:171] duration metric: took 326.93575ms to LocalClient.Create
	I0311 14:05:01.549126    5356 start.go:128] duration metric: took 2.354565459s to createHost
	I0311 14:05:01.549218    5356 start.go:83] releasing machines lock for "enable-default-cni-425000", held for 2.35471575s
	W0311 14:05:01.549275    5356 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:01.562196    5356 out.go:177] * Deleting "enable-default-cni-425000" in qemu2 ...
	W0311 14:05:01.587542    5356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:01.587571    5356 start.go:728] Will try again in 5 seconds ...
	I0311 14:05:06.591144    5356 start.go:360] acquireMachinesLock for enable-default-cni-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:06.591605    5356 start.go:364] duration metric: took 347.25µs to acquireMachinesLock for "enable-default-cni-425000"
	I0311 14:05:06.591737    5356 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:06.592012    5356 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:06.601304    5356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:05:06.649289    5356 start.go:159] libmachine.API.Create for "enable-default-cni-425000" (driver="qemu2")
	I0311 14:05:06.649343    5356 client.go:168] LocalClient.Create starting
	I0311 14:05:06.649436    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:06.649490    5356 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:06.649506    5356 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:06.649561    5356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:06.649600    5356 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:06.649610    5356 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:06.650136    5356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:06.794142    5356 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:06.909003    5356 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:06.909013    5356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:06.909192    5356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0311 14:05:06.921328    5356 main.go:141] libmachine: STDOUT: 
	I0311 14:05:06.921348    5356 main.go:141] libmachine: STDERR: 
	I0311 14:05:06.921406    5356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2 +20000M
	I0311 14:05:06.931925    5356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:06.931942    5356 main.go:141] libmachine: STDERR: 
	I0311 14:05:06.931955    5356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0311 14:05:06.931965    5356 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:06.932010    5356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:42:7f:60:99:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/enable-default-cni-425000/disk.qcow2
	I0311 14:05:06.933719    5356 main.go:141] libmachine: STDOUT: 
	I0311 14:05:06.933733    5356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:06.933746    5356 client.go:171] duration metric: took 284.282709ms to LocalClient.Create
	I0311 14:05:08.936688    5356 start.go:128] duration metric: took 2.343731208s to createHost
	I0311 14:05:08.936756    5356 start.go:83] releasing machines lock for "enable-default-cni-425000", held for 2.344213583s
	W0311 14:05:08.937150    5356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:08.952716    5356 out.go:177] 
	W0311 14:05:08.956929    5356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:05:08.956959    5356 out.go:239] * 
	* 
	W0311 14:05:08.959555    5356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:05:08.971792    5356 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.91s)
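
Note: every failure in this group dies at the same step. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, and that connect is refused, so no VM ever boots. A minimal Go probe (a sketch, assuming only the default socket path shown in these logs) reproduces the failing step in isolation:

	// socketprobe.go - sketch: check whether the socket_vmnet daemon is
	// accepting connections on the unix socket used in this run.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the error in this report:
			// the socket file exists but nothing is listening on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails the same way, the socket_vmnet daemon (typically run as a root service on these macOS agents) needs to be restarted before any of the remaining network-plugin tests can pass.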

TestNetworkPlugins/group/bridge/Start (9.75s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.744649s)

-- stdout --
	* [bridge-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-425000" primary control-plane node in "bridge-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:05:11.333535    5466 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:05:11.333673    5466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:11.333676    5466 out.go:304] Setting ErrFile to fd 2...
	I0311 14:05:11.333678    5466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:11.333807    5466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:05:11.334888    5466 out.go:298] Setting JSON to false
	I0311 14:05:11.350797    5466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3882,"bootTime":1710187229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:05:11.350862    5466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:05:11.357378    5466 out.go:177] * [bridge-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:05:11.364231    5466 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:05:11.367495    5466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:05:11.364270    5466 notify.go:220] Checking for updates...
	I0311 14:05:11.374314    5466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:05:11.378373    5466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:05:11.381377    5466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:05:11.384358    5466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:05:11.387773    5466 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:11.387841    5466 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:11.387887    5466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:05:11.391396    5466 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:05:11.398345    5466 start.go:297] selected driver: qemu2
	I0311 14:05:11.398351    5466 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:05:11.398359    5466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:05:11.400666    5466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:05:11.403378    5466 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:05:11.406408    5466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:05:11.406461    5466 cni.go:84] Creating CNI manager for "bridge"
	I0311 14:05:11.406465    5466 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:05:11.406505    5466 start.go:340] cluster config:
	{Name:bridge-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:05:11.411170    5466 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:11.419249    5466 out.go:177] * Starting "bridge-425000" primary control-plane node in "bridge-425000" cluster
	I0311 14:05:11.423422    5466 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:05:11.423439    5466 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:05:11.423451    5466 cache.go:56] Caching tarball of preloaded images
	I0311 14:05:11.423526    5466 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:05:11.423532    5466 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:05:11.423607    5466 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/bridge-425000/config.json ...
	I0311 14:05:11.423618    5466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/bridge-425000/config.json: {Name:mkdabe6f6efe734648b1dcbef71229d8763aaebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:05:11.423837    5466 start.go:360] acquireMachinesLock for bridge-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:11.423871    5466 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "bridge-425000"
	I0311 14:05:11.423884    5466 start.go:93] Provisioning new machine with config: &{Name:bridge-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:11.423912    5466 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:11.432352    5466 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:05:11.451195    5466 start.go:159] libmachine.API.Create for "bridge-425000" (driver="qemu2")
	I0311 14:05:11.451228    5466 client.go:168] LocalClient.Create starting
	I0311 14:05:11.451296    5466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:11.451328    5466 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:11.451339    5466 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:11.451386    5466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:11.451410    5466 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:11.451417    5466 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:11.451832    5466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:11.583836    5466 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:11.632762    5466 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:11.632767    5466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:11.632956    5466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2
	I0311 14:05:11.645022    5466 main.go:141] libmachine: STDOUT: 
	I0311 14:05:11.645042    5466 main.go:141] libmachine: STDERR: 
	I0311 14:05:11.645099    5466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2 +20000M
	I0311 14:05:11.655727    5466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:11.655746    5466 main.go:141] libmachine: STDERR: 
	I0311 14:05:11.655758    5466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2
	I0311 14:05:11.655763    5466 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:11.655802    5466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:eb:57:8c:17:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2
	I0311 14:05:11.657520    5466 main.go:141] libmachine: STDOUT: 
	I0311 14:05:11.657535    5466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:11.657551    5466 client.go:171] duration metric: took 206.2565ms to LocalClient.Create
	I0311 14:05:13.660338    5466 start.go:128] duration metric: took 2.235770834s to createHost
	I0311 14:05:13.660504    5466 start.go:83] releasing machines lock for "bridge-425000", held for 2.235926s
	W0311 14:05:13.660564    5466 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:13.675760    5466 out.go:177] * Deleting "bridge-425000" in qemu2 ...
	W0311 14:05:13.700163    5466 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:13.700198    5466 start.go:728] Will try again in 5 seconds ...
	I0311 14:05:18.703517    5466 start.go:360] acquireMachinesLock for bridge-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:18.704035    5466 start.go:364] duration metric: took 326.5µs to acquireMachinesLock for "bridge-425000"
	I0311 14:05:18.704184    5466 start.go:93] Provisioning new machine with config: &{Name:bridge-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:18.704464    5466 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:18.716954    5466 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:05:18.766224    5466 start.go:159] libmachine.API.Create for "bridge-425000" (driver="qemu2")
	I0311 14:05:18.766267    5466 client.go:168] LocalClient.Create starting
	I0311 14:05:18.766380    5466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:18.766448    5466 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:18.766464    5466 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:18.766521    5466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:18.766563    5466 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:18.766574    5466 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:18.767106    5466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:18.910940    5466 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:18.975590    5466 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:18.975595    5466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:18.975760    5466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2
	I0311 14:05:18.987989    5466 main.go:141] libmachine: STDOUT: 
	I0311 14:05:18.988010    5466 main.go:141] libmachine: STDERR: 
	I0311 14:05:18.988069    5466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2 +20000M
	I0311 14:05:18.998667    5466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:18.998697    5466 main.go:141] libmachine: STDERR: 
	I0311 14:05:18.998712    5466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2
	I0311 14:05:18.998715    5466 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:18.998751    5466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:7c:36:55:a7:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/bridge-425000/disk.qcow2
	I0311 14:05:19.000453    5466 main.go:141] libmachine: STDOUT: 
	I0311 14:05:19.000470    5466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:19.000482    5466 client.go:171] duration metric: took 234.16975ms to LocalClient.Create
	I0311 14:05:21.002987    5466 start.go:128] duration metric: took 2.29811975s to createHost
	I0311 14:05:21.003055    5466 start.go:83] releasing machines lock for "bridge-425000", held for 2.298602625s
	W0311 14:05:21.003393    5466 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:21.018000    5466 out.go:177] 
	W0311 14:05:21.021129    5466 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:05:21.021165    5466 out.go:239] * 
	* 
	W0311 14:05:21.023781    5466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:05:21.036011    5466 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.75s)
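
Note the QEMU invocation that fails above: socket_vmnet_client is expected to connect to /var/run/socket_vmnet and then exec qemu-system-aarch64 with the connected socket already open, which is why the guest NIC is declared as `-netdev socket,id=net0,fd=3`. The real client is a separate helper program shipped with socket_vmnet; the following Go sketch (illustrative only, with a trimmed QEMU command line) shows the same descriptor hand-off, relying on the fact that exec.Cmd.ExtraFiles maps its first entry to descriptor 3 in the child process:

	// fdpass.go - illustrative sketch of the fd=3 hand-off performed by
	// socket_vmnet_client; not the real implementation.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the step failing throughout this report.
			log.Fatalf("connect to socket_vmnet: %v", err)
		}
		sock, err := conn.(*net.UnixConn).File() // dup the connected socket as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3") // fd 3 = ExtraFiles[0]
		cmd.ExtraFiles = []*os.File{sock} // child inherits the socket as descriptor 3
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("qemu: %v", err)
		}
	}

Because the connect happens in the client before QEMU is ever exec'd, a refused connection aborts the whole launch, which is what the `exit status 1` wrapped inside minikube's `exit status 80` reflects here.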

TestNetworkPlugins/group/kubenet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-425000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.820192208s)

-- stdout --
	* [kubenet-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-425000" primary control-plane node in "kubenet-425000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-425000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:05:23.365852    5579 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:05:23.365999    5579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:23.366002    5579 out.go:304] Setting ErrFile to fd 2...
	I0311 14:05:23.366004    5579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:23.366125    5579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:05:23.367197    5579 out.go:298] Setting JSON to false
	I0311 14:05:23.383070    5579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3894,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:05:23.383142    5579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:05:23.389275    5579 out.go:177] * [kubenet-425000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:05:23.395212    5579 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:05:23.398236    5579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:05:23.395292    5579 notify.go:220] Checking for updates...
	I0311 14:05:23.405194    5579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:05:23.408231    5579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:05:23.411167    5579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:05:23.414190    5579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:05:23.417533    5579 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:23.417609    5579 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:23.417653    5579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:05:23.421174    5579 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:05:23.428209    5579 start.go:297] selected driver: qemu2
	I0311 14:05:23.428215    5579 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:05:23.428223    5579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:05:23.430503    5579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:05:23.432070    5579 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:05:23.435305    5579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:05:23.435329    5579 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0311 14:05:23.435367    5579 start.go:340] cluster config:
	{Name:kubenet-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:05:23.439783    5579 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:23.448236    5579 out.go:177] * Starting "kubenet-425000" primary control-plane node in "kubenet-425000" cluster
	I0311 14:05:23.452178    5579 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:05:23.452196    5579 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:05:23.452216    5579 cache.go:56] Caching tarball of preloaded images
	I0311 14:05:23.452286    5579 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:05:23.452292    5579 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:05:23.452370    5579 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kubenet-425000/config.json ...
	I0311 14:05:23.452382    5579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/kubenet-425000/config.json: {Name:mkaa854ed76df8eab35a5fbc2f9c0a779cc0ec3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:05:23.452598    5579 start.go:360] acquireMachinesLock for kubenet-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:23.452633    5579 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "kubenet-425000"
	I0311 14:05:23.452645    5579 start.go:93] Provisioning new machine with config: &{Name:kubenet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:23.452679    5579 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:23.461203    5579 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:05:23.478707    5579 start.go:159] libmachine.API.Create for "kubenet-425000" (driver="qemu2")
	I0311 14:05:23.478731    5579 client.go:168] LocalClient.Create starting
	I0311 14:05:23.478782    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:23.478810    5579 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:23.478823    5579 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:23.478865    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:23.478887    5579 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:23.478893    5579 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:23.479242    5579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:23.613360    5579 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:23.700318    5579 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:23.700323    5579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:23.700509    5579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2
	I0311 14:05:23.713022    5579 main.go:141] libmachine: STDOUT: 
	I0311 14:05:23.713040    5579 main.go:141] libmachine: STDERR: 
	I0311 14:05:23.713088    5579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2 +20000M
	I0311 14:05:23.724138    5579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:23.724156    5579 main.go:141] libmachine: STDERR: 
	I0311 14:05:23.724175    5579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2
	I0311 14:05:23.724178    5579 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:23.724208    5579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:85:61:57:d6:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2
	I0311 14:05:23.726094    5579 main.go:141] libmachine: STDOUT: 
	I0311 14:05:23.726112    5579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:23.726129    5579 client.go:171] duration metric: took 247.362583ms to LocalClient.Create
	I0311 14:05:25.727981    5579 start.go:128] duration metric: took 2.27502125s to createHost
	I0311 14:05:25.728073    5579 start.go:83] releasing machines lock for "kubenet-425000", held for 2.275173708s
	W0311 14:05:25.728116    5579 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:25.738222    5579 out.go:177] * Deleting "kubenet-425000" in qemu2 ...
	W0311 14:05:25.768910    5579 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:25.768947    5579 start.go:728] Will try again in 5 seconds ...
	I0311 14:05:30.771524    5579 start.go:360] acquireMachinesLock for kubenet-425000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:30.771956    5579 start.go:364] duration metric: took 344.25µs to acquireMachinesLock for "kubenet-425000"
	I0311 14:05:30.772083    5579 start.go:93] Provisioning new machine with config: &{Name:kubenet-425000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:30.772415    5579 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:30.782078    5579 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0311 14:05:30.831106    5579 start.go:159] libmachine.API.Create for "kubenet-425000" (driver="qemu2")
	I0311 14:05:30.831186    5579 client.go:168] LocalClient.Create starting
	I0311 14:05:30.831337    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:30.831398    5579 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:30.831416    5579 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:30.831483    5579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:30.831525    5579 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:30.831539    5579 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:30.832073    5579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:30.975233    5579 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:31.083789    5579 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:31.083799    5579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:31.083967    5579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2
	I0311 14:05:31.096113    5579 main.go:141] libmachine: STDOUT: 
	I0311 14:05:31.096131    5579 main.go:141] libmachine: STDERR: 
	I0311 14:05:31.096181    5579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2 +20000M
	I0311 14:05:31.106744    5579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:31.106764    5579 main.go:141] libmachine: STDERR: 
	I0311 14:05:31.106782    5579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2
	I0311 14:05:31.106787    5579 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:31.106828    5579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:2d:54:78:b9:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/kubenet-425000/disk.qcow2
	I0311 14:05:31.108508    5579 main.go:141] libmachine: STDOUT: 
	I0311 14:05:31.108523    5579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:31.108534    5579 client.go:171] duration metric: took 277.313166ms to LocalClient.Create
	I0311 14:05:33.110841    5579 start.go:128] duration metric: took 2.338254709s to createHost
	I0311 14:05:33.110932    5579 start.go:83] releasing machines lock for "kubenet-425000", held for 2.33881175s
	W0311 14:05:33.111337    5579 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-425000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:33.124068    5579 out.go:177] 
	W0311 14:05:33.129101    5579 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:05:33.129158    5579 out.go:239] * 
	* 
	W0311 14:05:33.131725    5579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:05:33.142018    5579 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.82s)
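Every failure in this run has the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the VM is never created. A minimal triage sketch for the CI host follows; the daemon invocation and gateway address mirror the socket_vmnet README and are assumptions about how this agent is provisioned, not something the log confirms:

	# Is anything listening on the socket the error points at?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet || echo "no socket_vmnet daemon is listening"

	# Relaunch the daemon by hand (assumed invocation, per the socket_vmnet README)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Once the daemon is listening again, the qemu-system-aarch64 command logged above should start cleanly.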

TestStartStop/group/old-k8s-version/serial/FirstStart (9.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.692763166s)

-- stdout --
	* [old-k8s-version-930000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-930000" primary control-plane node in "old-k8s-version-930000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-930000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:05:35.455053    5689 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:05:35.455189    5689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:35.455192    5689 out.go:304] Setting ErrFile to fd 2...
	I0311 14:05:35.455195    5689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:35.455319    5689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:05:35.456369    5689 out.go:298] Setting JSON to false
	I0311 14:05:35.472331    5689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3906,"bootTime":1710187229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:05:35.472392    5689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:05:35.478853    5689 out.go:177] * [old-k8s-version-930000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:05:35.485779    5689 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:05:35.489835    5689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:05:35.485811    5689 notify.go:220] Checking for updates...
	I0311 14:05:35.493808    5689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:05:35.496755    5689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:05:35.499754    5689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:05:35.502747    5689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:05:35.506167    5689 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:35.506238    5689 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:35.506292    5689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:05:35.510853    5689 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:05:35.517818    5689 start.go:297] selected driver: qemu2
	I0311 14:05:35.517826    5689 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:05:35.517834    5689 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:05:35.520153    5689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:05:35.523755    5689 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:05:35.526839    5689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:05:35.526884    5689 cni.go:84] Creating CNI manager for ""
	I0311 14:05:35.526892    5689 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 14:05:35.526915    5689 start.go:340] cluster config:
	{Name:old-k8s-version-930000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:05:35.531352    5689 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:35.539615    5689 out.go:177] * Starting "old-k8s-version-930000" primary control-plane node in "old-k8s-version-930000" cluster
	I0311 14:05:35.543764    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 14:05:35.543780    5689 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 14:05:35.543799    5689 cache.go:56] Caching tarball of preloaded images
	I0311 14:05:35.543856    5689 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:05:35.543862    5689 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 14:05:35.543929    5689 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/old-k8s-version-930000/config.json ...
	I0311 14:05:35.543941    5689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/old-k8s-version-930000/config.json: {Name:mkf3999f6152281dbb470f918b9e7bf9cca4430f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:05:35.544157    5689 start.go:360] acquireMachinesLock for old-k8s-version-930000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:35.544193    5689 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "old-k8s-version-930000"
	I0311 14:05:35.544204    5689 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:35.544245    5689 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:35.552737    5689 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:05:35.570214    5689 start.go:159] libmachine.API.Create for "old-k8s-version-930000" (driver="qemu2")
	I0311 14:05:35.570237    5689 client.go:168] LocalClient.Create starting
	I0311 14:05:35.570286    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:35.570318    5689 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:35.570326    5689 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:35.570372    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:35.570395    5689 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:35.570403    5689 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:35.570762    5689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:35.701153    5689 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:35.730811    5689 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:35.730820    5689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:35.731003    5689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:35.743447    5689 main.go:141] libmachine: STDOUT: 
	I0311 14:05:35.743466    5689 main.go:141] libmachine: STDERR: 
	I0311 14:05:35.743521    5689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2 +20000M
	I0311 14:05:35.754018    5689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:35.754045    5689 main.go:141] libmachine: STDERR: 
	I0311 14:05:35.754064    5689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:35.754070    5689 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:35.754101    5689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9f:47:20:cb:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:35.755783    5689 main.go:141] libmachine: STDOUT: 
	I0311 14:05:35.755801    5689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:35.755818    5689 client.go:171] duration metric: took 185.568375ms to LocalClient.Create
	I0311 14:05:37.758155    5689 start.go:128] duration metric: took 2.213767208s to createHost
	I0311 14:05:37.758282    5689 start.go:83] releasing machines lock for "old-k8s-version-930000", held for 2.213994875s
	W0311 14:05:37.758353    5689 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:37.770462    5689 out.go:177] * Deleting "old-k8s-version-930000" in qemu2 ...
	W0311 14:05:37.795929    5689 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:37.795966    5689 start.go:728] Will try again in 5 seconds ...
	I0311 14:05:42.798307    5689 start.go:360] acquireMachinesLock for old-k8s-version-930000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:42.798753    5689 start.go:364] duration metric: took 327.25µs to acquireMachinesLock for "old-k8s-version-930000"
	I0311 14:05:42.798878    5689 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:42.799204    5689 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:42.808931    5689 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:05:42.858506    5689 start.go:159] libmachine.API.Create for "old-k8s-version-930000" (driver="qemu2")
	I0311 14:05:42.858560    5689 client.go:168] LocalClient.Create starting
	I0311 14:05:42.858683    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:42.858760    5689 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:42.858782    5689 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:42.858846    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:42.858895    5689 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:42.858908    5689 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:42.859465    5689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:43.003431    5689 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:43.045293    5689 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:43.045299    5689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:43.045482    5689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:43.057668    5689 main.go:141] libmachine: STDOUT: 
	I0311 14:05:43.057685    5689 main.go:141] libmachine: STDERR: 
	I0311 14:05:43.057738    5689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2 +20000M
	I0311 14:05:43.068628    5689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:43.068644    5689 main.go:141] libmachine: STDERR: 
	I0311 14:05:43.068655    5689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:43.068662    5689 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:43.068706    5689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:3e:b0:90:44:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:43.070460    5689 main.go:141] libmachine: STDOUT: 
	I0311 14:05:43.070476    5689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:43.070499    5689 client.go:171] duration metric: took 211.928959ms to LocalClient.Create
	I0311 14:05:45.072713    5689 start.go:128] duration metric: took 2.273448166s to createHost
	I0311 14:05:45.072791    5689 start.go:83] releasing machines lock for "old-k8s-version-930000", held for 2.273981041s
	W0311 14:05:45.073119    5689 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-930000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-930000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:45.087860    5689 out.go:177] 
	W0311 14:05:45.091031    5689 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:05:45.091056    5689 out.go:239] * 
	* 
	W0311 14:05:45.093623    5689 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:05:45.102883    5689 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (70.458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.77s)
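The advice minikube prints is the right recovery order once the daemon is listening again: delete the half-created profile, then rerun the start. A sketch reusing the arguments from the failed invocation above:

	# Remove the stale profile left behind by the failed create
	out/minikube-darwin-arm64 delete -p old-k8s-version-930000
	# Retry with the same driver and Kubernetes version the test used
	out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0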

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-930000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-930000 create -f testdata/busybox.yaml: exit status 1 (28.611709ms)

** stderr ** 
	error: context "old-k8s-version-930000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-930000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.316084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.404583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
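This failure is downstream of FirstStart: minikube only writes a kubeconfig context once a start succeeds, so every kubectl --context call in this group hits a context that was never created. A quick way to confirm that before blaming the deploy step (standard kubectl, no minikube involvement):

	# Lists context names only; exact-match grep avoids partial hits
	kubectl config get-contexts -o name | grep -x old-k8s-version-930000 \
	  || echo "context missing: the cluster was never started"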

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-930000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-930000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-930000 describe deploy/metrics-server -n kube-system: exit status 1 (26.699625ms)

** stderr ** 
	error: context "old-k8s-version-930000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-930000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.541625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.198114084s)

-- stdout --
	* [old-k8s-version-930000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-930000" primary control-plane node in "old-k8s-version-930000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-930000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-930000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:05:48.750690    5745 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:05:48.750859    5745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:48.750862    5745 out.go:304] Setting ErrFile to fd 2...
	I0311 14:05:48.750864    5745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:48.750996    5745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:05:48.752080    5745 out.go:298] Setting JSON to false
	I0311 14:05:48.768162    5745 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3919,"bootTime":1710187229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:05:48.768230    5745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:05:48.773060    5745 out.go:177] * [old-k8s-version-930000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:05:48.779959    5745 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:05:48.783028    5745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:05:48.780019    5745 notify.go:220] Checking for updates...
	I0311 14:05:48.790036    5745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:05:48.793028    5745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:05:48.796048    5745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:05:48.799022    5745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:05:48.802250    5745 config.go:182] Loaded profile config "old-k8s-version-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0311 14:05:48.806033    5745 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0311 14:05:48.808958    5745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:05:48.812971    5745 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 14:05:48.819927    5745 start.go:297] selected driver: qemu2
	I0311 14:05:48.819934    5745 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:05:48.819998    5745 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:05:48.822323    5745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:05:48.822351    5745 cni.go:84] Creating CNI manager for ""
	I0311 14:05:48.822358    5745 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 14:05:48.822382    5745 start.go:340] cluster config:
	{Name:old-k8s-version-930000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-930000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:05:48.826777    5745 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:48.834915    5745 out.go:177] * Starting "old-k8s-version-930000" primary control-plane node in "old-k8s-version-930000" cluster
	I0311 14:05:48.839020    5745 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 14:05:48.839035    5745 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 14:05:48.839050    5745 cache.go:56] Caching tarball of preloaded images
	I0311 14:05:48.839120    5745 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:05:48.839125    5745 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 14:05:48.839193    5745 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/old-k8s-version-930000/config.json ...
	I0311 14:05:48.839637    5745 start.go:360] acquireMachinesLock for old-k8s-version-930000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:48.839666    5745 start.go:364] duration metric: took 20.25µs to acquireMachinesLock for "old-k8s-version-930000"
	I0311 14:05:48.839674    5745 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:05:48.839679    5745 fix.go:54] fixHost starting: 
	I0311 14:05:48.839800    5745 fix.go:112] recreateIfNeeded on old-k8s-version-930000: state=Stopped err=<nil>
	W0311 14:05:48.839808    5745 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:05:48.843918    5745 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-930000" ...
	I0311 14:05:48.852014    5745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:3e:b0:90:44:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:48.854063    5745 main.go:141] libmachine: STDOUT: 
	I0311 14:05:48.854085    5745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:48.854113    5745 fix.go:56] duration metric: took 14.433542ms for fixHost
	I0311 14:05:48.854117    5745 start.go:83] releasing machines lock for "old-k8s-version-930000", held for 14.447709ms
	W0311 14:05:48.854123    5745 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:05:48.854156    5745 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:48.854161    5745 start.go:728] Will try again in 5 seconds ...
	I0311 14:05:53.855470    5745 start.go:360] acquireMachinesLock for old-k8s-version-930000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:53.855802    5745 start.go:364] duration metric: took 259.375µs to acquireMachinesLock for "old-k8s-version-930000"
	I0311 14:05:53.855915    5745 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:05:53.855933    5745 fix.go:54] fixHost starting: 
	I0311 14:05:53.856621    5745 fix.go:112] recreateIfNeeded on old-k8s-version-930000: state=Stopped err=<nil>
	W0311 14:05:53.856644    5745 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:05:53.866930    5745 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-930000" ...
	I0311 14:05:53.870210    5745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:3e:b0:90:44:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/old-k8s-version-930000/disk.qcow2
	I0311 14:05:53.880200    5745 main.go:141] libmachine: STDOUT: 
	I0311 14:05:53.880290    5745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:53.880370    5745 fix.go:56] duration metric: took 24.43725ms for fixHost
	I0311 14:05:53.880390    5745 start.go:83] releasing machines lock for "old-k8s-version-930000", held for 24.565042ms
	W0311 14:05:53.880589    5745 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-930000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:53.888946    5745 out.go:177] 
	W0311 14:05:53.893040    5745 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:05:53.893071    5745 out.go:239] * 
	* 
	W0311 14:05:53.895849    5745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:05:53.903845    5745 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-930000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (69.142458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-930000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (34.98075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-930000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-930000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-930000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.239542ms)

** stderr ** 
	error: context "old-k8s-version-930000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-930000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.930125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-930000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
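The block above is a go-cmp style "-want +got" diff: each "-"-prefixed line is an expected v1.20.0 image that `image list` failed to report, which follows directly from the VM never having booted. The comparison can be replayed manually; a sketch, assuming the same expected list the test encodes:

    # diff the expected v1.20.0 images against what the profile reports
    diff <(printf '%s\n' \
        k8s.gcr.io/coredns:1.7.0 \
        k8s.gcr.io/etcd:3.4.13-0 \
        k8s.gcr.io/k8s-minikube/storage-provisioner:v5 \
        k8s.gcr.io/kube-apiserver:v1.20.0 \
        k8s.gcr.io/kube-controller-manager:v1.20.0 \
        k8s.gcr.io/kube-proxy:v1.20.0 \
        k8s.gcr.io/kube-scheduler:v1.20.0 \
        k8s.gcr.io/pause:3.2) \
      <(out/minikube-darwin-arm64 -p old-k8s-version-930000 image list | sort)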
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.265166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-930000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-930000 --alsologtostderr -v=1: exit status 83 (43.167708ms)

-- stdout --
	* The control-plane node old-k8s-version-930000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-930000"

-- /stdout --
** stderr ** 
	I0311 14:05:54.189795    5764 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:05:54.190196    5764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:54.190200    5764 out.go:304] Setting ErrFile to fd 2...
	I0311 14:05:54.190202    5764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:54.190346    5764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:05:54.190551    5764 out.go:298] Setting JSON to false
	I0311 14:05:54.190560    5764 mustload.go:65] Loading cluster: old-k8s-version-930000
	I0311 14:05:54.190755    5764 config.go:182] Loaded profile config "old-k8s-version-930000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0311 14:05:54.195106    5764 out.go:177] * The control-plane node old-k8s-version-930000 host is not running: state=Stopped
	I0311 14:05:54.198087    5764 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-930000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-930000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.491041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (32.07075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-930000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
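pause short-circuits on the stopped host with exit status 83, the advisory outcome whose remedy minikube prints itself. For completeness, the recovery sequence the tool suggests would be the following; it cannot succeed in this run while socket_vmnet is unreachable (see the no-preload logs below):

    out/minikube-darwin-arm64 start -p old-k8s-version-930000
    out/minikube-darwin-arm64 pause -p old-k8s-version-930000 --alsologtostderr -v=1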

TestStartStop/group/no-preload/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.838587625s)

-- stdout --
	* [no-preload-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-360000" primary control-plane node in "no-preload-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:05:54.666910    5787 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:05:54.667058    5787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:54.667061    5787 out.go:304] Setting ErrFile to fd 2...
	I0311 14:05:54.667064    5787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:05:54.667195    5787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:05:54.668407    5787 out.go:298] Setting JSON to false
	I0311 14:05:54.684453    5787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3925,"bootTime":1710187229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:05:54.684519    5787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:05:54.689476    5787 out.go:177] * [no-preload-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:05:54.697372    5787 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:05:54.701409    5787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:05:54.697419    5787 notify.go:220] Checking for updates...
	I0311 14:05:54.707303    5787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:05:54.710381    5787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:05:54.713302    5787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:05:54.716309    5787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:05:54.719638    5787 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:54.719697    5787 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:05:54.719748    5787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:05:54.724297    5787 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:05:54.731355    5787 start.go:297] selected driver: qemu2
	I0311 14:05:54.731360    5787 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:05:54.731367    5787 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:05:54.733650    5787 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:05:54.736295    5787 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:05:54.739420    5787 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:05:54.739445    5787 cni.go:84] Creating CNI manager for ""
	I0311 14:05:54.739454    5787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:05:54.739472    5787 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:05:54.739501    5787 start.go:340] cluster config:
	{Name:no-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:05:54.744144    5787 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.752326    5787 out.go:177] * Starting "no-preload-360000" primary control-plane node in "no-preload-360000" cluster
	I0311 14:05:54.756365    5787 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 14:05:54.756442    5787 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/no-preload-360000/config.json ...
	I0311 14:05:54.756457    5787 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/no-preload-360000/config.json: {Name:mk60e5420decebcad6813ffdc98050490021fb30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:05:54.756506    5787 cache.go:107] acquiring lock: {Name:mkc90b595b88f4abeb655b3d9dc69d8b56b767a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756545    5787 cache.go:107] acquiring lock: {Name:mk4118d501154cb96715fe04ec4b883f1b61613f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756577    5787 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0311 14:05:54.756587    5787 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.375µs
	I0311 14:05:54.756595    5787 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0311 14:05:54.756605    5787 cache.go:107] acquiring lock: {Name:mk886fbc82374b114efaea8701c8780ea04508ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756506    5787 cache.go:107] acquiring lock: {Name:mk901552f8224dbbed9da1b953cc03962859a946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756600    5787 cache.go:107] acquiring lock: {Name:mka2636fa3a9c3a8287615423c8bf10ffbddeb5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756506    5787 cache.go:107] acquiring lock: {Name:mk309dee06d31fdeb1da0eb07ff2858197b34036 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756716    5787 start.go:360] acquireMachinesLock for no-preload-360000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:05:54.756714    5787 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0311 14:05:54.756730    5787 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 14:05:54.756746    5787 cache.go:107] acquiring lock: {Name:mk367900713cb3f1a26f484b51db5eae9cf05fea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756779    5787 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 14:05:54.756753    5787 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "no-preload-360000"
	I0311 14:05:54.756648    5787 cache.go:107] acquiring lock: {Name:mk5341a7721ba39e67d53a49ae25e91baf1b15fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:05:54.756912    5787 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 14:05:54.756837    5787 start.go:93] Provisioning new machine with config: &{Name:no-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:05:54.756953    5787 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:05:54.756958    5787 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 14:05:54.756946    5787 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 14:05:54.756962    5787 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 14:05:54.765336    5787 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:05:54.770921    5787 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 14:05:54.778531    5787 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0311 14:05:54.778826    5787 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0311 14:05:54.778864    5787 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0311 14:05:54.778928    5787 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0311 14:05:54.779130    5787 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0311 14:05:54.779165    5787 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0311 14:05:54.784091    5787 start.go:159] libmachine.API.Create for "no-preload-360000" (driver="qemu2")
	I0311 14:05:54.784109    5787 client.go:168] LocalClient.Create starting
	I0311 14:05:54.784179    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:05:54.784209    5787 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:54.784219    5787 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:54.784266    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:05:54.784291    5787 main.go:141] libmachine: Decoding PEM data...
	I0311 14:05:54.784300    5787 main.go:141] libmachine: Parsing certificate...
	I0311 14:05:54.784632    5787 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:05:54.924156    5787 main.go:141] libmachine: Creating SSH key...
	I0311 14:05:55.048998    5787 main.go:141] libmachine: Creating Disk image...
	I0311 14:05:55.049023    5787 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:05:55.049218    5787 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:05:55.062028    5787 main.go:141] libmachine: STDOUT: 
	I0311 14:05:55.062056    5787 main.go:141] libmachine: STDERR: 
	I0311 14:05:55.062106    5787 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2 +20000M
	I0311 14:05:55.074279    5787 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:05:55.074300    5787 main.go:141] libmachine: STDERR: 
	I0311 14:05:55.074310    5787 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:05:55.074312    5787 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:05:55.074350    5787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:2a:6b:f4:04:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:05:55.076370    5787 main.go:141] libmachine: STDOUT: 
	I0311 14:05:55.076385    5787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:05:55.076412    5787 client.go:171] duration metric: took 292.300583ms to LocalClient.Create
	I0311 14:05:56.759506    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0311 14:05:56.794788    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0311 14:05:56.831802    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0311 14:05:56.855741    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0311 14:05:56.861405    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0311 14:05:56.862275    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0311 14:05:56.865162    5787 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0311 14:05:56.993609    5787 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0311 14:05:56.993662    5787 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.237132959s
	I0311 14:05:56.993693    5787 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0311 14:05:57.077280    5787 start.go:128] duration metric: took 2.320323584s to createHost
	I0311 14:05:57.077334    5787 start.go:83] releasing machines lock for "no-preload-360000", held for 2.320545375s
	W0311 14:05:57.077378    5787 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:57.088184    5787 out.go:177] * Deleting "no-preload-360000" in qemu2 ...
	W0311 14:05:57.116774    5787 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:05:57.116813    5787 start.go:728] Will try again in 5 seconds ...
	I0311 14:05:58.937002    5787 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0311 14:05:58.937055    5787 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.180443333s
	I0311 14:05:58.937079    5787 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0311 14:06:00.093909    5787 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0311 14:06:00.093965    5787 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.337527666s
	I0311 14:06:00.094017    5787 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0311 14:06:00.350740    5787 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0311 14:06:00.350793    5787 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 5.594163625s
	I0311 14:06:00.350843    5787 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0311 14:06:00.514351    5787 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0311 14:06:00.514401    5787 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 5.757975666s
	I0311 14:06:00.514443    5787 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0311 14:06:01.026704    5787 cache.go:157] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0311 14:06:01.026758    5787 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.270256833s
	I0311 14:06:01.026783    5787 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0311 14:06:02.117964    5787 start.go:360] acquireMachinesLock for no-preload-360000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:02.118391    5787 start.go:364] duration metric: took 351.167µs to acquireMachinesLock for "no-preload-360000"
	I0311 14:06:02.118515    5787 start.go:93] Provisioning new machine with config: &{Name:no-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:02.118782    5787 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:02.129279    5787 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:02.177613    5787 start.go:159] libmachine.API.Create for "no-preload-360000" (driver="qemu2")
	I0311 14:06:02.177694    5787 client.go:168] LocalClient.Create starting
	I0311 14:06:02.177825    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:02.177891    5787 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:02.177915    5787 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:02.177983    5787 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:02.178025    5787 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:02.178041    5787 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:02.178532    5787 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:02.331295    5787 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:02.397914    5787 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:02.397919    5787 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:02.398094    5787 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:06:02.410771    5787 main.go:141] libmachine: STDOUT: 
	I0311 14:06:02.410805    5787 main.go:141] libmachine: STDERR: 
	I0311 14:06:02.410869    5787 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2 +20000M
	I0311 14:06:02.422035    5787 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:02.422092    5787 main.go:141] libmachine: STDERR: 
	I0311 14:06:02.422100    5787 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:06:02.422105    5787 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:02.422138    5787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:7c:95:21:60:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:06:02.423938    5787 main.go:141] libmachine: STDOUT: 
	I0311 14:06:02.423954    5787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:02.423971    5787 client.go:171] duration metric: took 246.264959ms to LocalClient.Create
	I0311 14:06:04.425074    5787 start.go:128] duration metric: took 2.306271625s to createHost
	I0311 14:06:04.425170    5787 start.go:83] releasing machines lock for "no-preload-360000", held for 2.306765375s
	W0311 14:06:04.425477    5787 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:04.437446    5787 out.go:177] 
	W0311 14:06:04.442100    5787 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:04.442173    5787 out.go:239] * 
	* 
	W0311 14:06:04.445297    5787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:04.458102    5787 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
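Worth noting from the trace: both create attempts get as far as building the guest disk (qemu-img convert from the raw image, then a resize by +20000M) and fail only when the NIC is attached through socket_vmnet. The disk steps in isolation look like this (paths abbreviated from the trace; a sketch, not a supported workflow), so the failure is squarely in host networking, not in image or disk preparation:

    # create the qcow2 overlay and grow it, as libmachine does
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M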
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (69.074ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.91s)
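Every qemu2 start in this report dies on the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. In other words, the socket_vmnet daemon was not listening on the agent when socket_vmnet_client tried to hand QEMU its network file descriptor (the fd=3 in the -netdev flag above). A hedged host-side check, assuming the /opt/socket_vmnet install that the client path in the trace implies:

    ls -l /var/run/socket_vmnet   # the listening socket should exist (root-owned)
    pgrep -fl socket_vmnet        # the daemon process should be running
    # if either is missing, the daemon must be (re)started as root, e.g. via its
    # launchd service, before any vmnet-backed qemu2 profile can come up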

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-360000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-360000 create -f testdata/busybox.yaml: exit status 1 (28.323416ms)

** stderr ** 
	error: context "no-preload-360000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-360000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (32.678333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (32.2255ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-360000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-360000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-360000 describe deploy/metrics-server -n kube-system: exit status 1 (26.534791ms)

** stderr ** 
	error: context "no-preload-360000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-360000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (31.917167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
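When the context does exist, the image an addon actually rolled out can be read straight off the deployment spec instead of being parsed out of describe output. A sketch using a standard jsonpath query (hypothetical here, since this run never created the context):

    kubectl --context no-preload-360000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # the test expects this to contain fake.domain/registry.k8s.io/echoserver:1.4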

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.187137291s)

-- stdout --
	* [no-preload-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-360000" primary control-plane node in "no-preload-360000" cluster
	* Restarting existing qemu2 VM for "no-preload-360000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-360000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:06:07.068656    5857 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:07.068791    5857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:07.068795    5857 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:07.068802    5857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:07.068928    5857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:07.069939    5857 out.go:298] Setting JSON to false
	I0311 14:06:07.085887    5857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3938,"bootTime":1710187229,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:07.085945    5857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:07.091182    5857 out.go:177] * [no-preload-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:07.098196    5857 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:07.098255    5857 notify.go:220] Checking for updates...
	I0311 14:06:07.102210    5857 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:07.106156    5857 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:07.109219    5857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:07.112255    5857 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:07.115161    5857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:07.118546    5857 config.go:182] Loaded profile config "no-preload-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 14:06:07.118812    5857 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:07.123144    5857 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 14:06:07.130211    5857 start.go:297] selected driver: qemu2
	I0311 14:06:07.130217    5857 start.go:901] validating driver "qemu2" against &{Name:no-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:07.130277    5857 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:07.132638    5857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:06:07.132671    5857 cni.go:84] Creating CNI manager for ""
	I0311 14:06:07.132680    5857 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:07.132705    5857 start.go:340] cluster config:
	{Name:no-preload-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:07.137141    5857 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.145172    5857 out.go:177] * Starting "no-preload-360000" primary control-plane node in "no-preload-360000" cluster
	I0311 14:06:07.149211    5857 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 14:06:07.149269    5857 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/no-preload-360000/config.json ...
	I0311 14:06:07.149301    5857 cache.go:107] acquiring lock: {Name:mkc90b595b88f4abeb655b3d9dc69d8b56b767a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149317    5857 cache.go:107] acquiring lock: {Name:mka2636fa3a9c3a8287615423c8bf10ffbddeb5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149325    5857 cache.go:107] acquiring lock: {Name:mk4118d501154cb96715fe04ec4b883f1b61613f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149365    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0311 14:06:07.149370    5857 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.25µs
	I0311 14:06:07.149378    5857 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0311 14:06:07.149384    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0311 14:06:07.149386    5857 cache.go:107] acquiring lock: {Name:mk886fbc82374b114efaea8701c8780ea04508ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149389    5857 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 85µs
	I0311 14:06:07.149394    5857 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0311 14:06:07.149385    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0311 14:06:07.149393    5857 cache.go:107] acquiring lock: {Name:mk901552f8224dbbed9da1b953cc03962859a946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149402    5857 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 77.75µs
	I0311 14:06:07.149480    5857 cache.go:107] acquiring lock: {Name:mk367900713cb3f1a26f484b51db5eae9cf05fea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149485    5857 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0311 14:06:07.149304    5857 cache.go:107] acquiring lock: {Name:mk309dee06d31fdeb1da0eb07ff2858197b34036 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149438    5857 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0311 14:06:07.149532    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0311 14:06:07.149539    5857 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 77.958µs
	I0311 14:06:07.149543    5857 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0311 14:06:07.149546    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0311 14:06:07.149550    5857 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 254.125µs
	I0311 14:06:07.149554    5857 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0311 14:06:07.149453    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0311 14:06:07.149568    5857 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 175.292µs
	I0311 14:06:07.149410    5857 cache.go:107] acquiring lock: {Name:mk5341a7721ba39e67d53a49ae25e91baf1b15fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:07.149573    5857 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0311 14:06:07.149607    5857 cache.go:115] /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0311 14:06:07.149611    5857 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 202.083µs
	I0311 14:06:07.149615    5857 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0311 14:06:07.149663    5857 start.go:360] acquireMachinesLock for no-preload-360000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:07.149705    5857 start.go:364] duration metric: took 31.584µs to acquireMachinesLock for "no-preload-360000"
	I0311 14:06:07.149715    5857 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:07.149722    5857 fix.go:54] fixHost starting: 
	I0311 14:06:07.149837    5857 fix.go:112] recreateIfNeeded on no-preload-360000: state=Stopped err=<nil>
	W0311 14:06:07.149846    5857 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:07.158166    5857 out.go:177] * Restarting existing qemu2 VM for "no-preload-360000" ...
	I0311 14:06:07.162198    5857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:7c:95:21:60:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:06:07.162788    5857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0311 14:06:07.164468    5857 main.go:141] libmachine: STDOUT: 
	I0311 14:06:07.164507    5857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:07.164535    5857 fix.go:56] duration metric: took 14.813708ms for fixHost
	I0311 14:06:07.164541    5857 start.go:83] releasing machines lock for "no-preload-360000", held for 14.831209ms
	W0311 14:06:07.164548    5857 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:07.164575    5857 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:07.164581    5857 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:09.126803    5857 cache.go:162] opening:  /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0311 14:06:12.164673    5857 start.go:360] acquireMachinesLock for no-preload-360000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:12.164909    5857 start.go:364] duration metric: took 163.875µs to acquireMachinesLock for "no-preload-360000"
	I0311 14:06:12.164974    5857 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:12.164990    5857 fix.go:54] fixHost starting: 
	I0311 14:06:12.165400    5857 fix.go:112] recreateIfNeeded on no-preload-360000: state=Stopped err=<nil>
	W0311 14:06:12.165415    5857 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:12.169619    5857 out.go:177] * Restarting existing qemu2 VM for "no-preload-360000" ...
	I0311 14:06:12.175855    5857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:7c:95:21:60:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/no-preload-360000/disk.qcow2
	I0311 14:06:12.186011    5857 main.go:141] libmachine: STDOUT: 
	I0311 14:06:12.186079    5857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:12.186157    5857 fix.go:56] duration metric: took 21.164333ms for fixHost
	I0311 14:06:12.186180    5857 start.go:83] releasing machines lock for "no-preload-360000", held for 21.25675ms
	W0311 14:06:12.186425    5857 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-360000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-360000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:12.194655    5857 out.go:177] 
	W0311 14:06:12.198480    5857 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:12.198542    5857 out.go:239] * 
	* 
	W0311 14:06:12.201322    5857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:12.214634    5857 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-360000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (71.850708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
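
Note: every failure in this group reduces to the same host-side condition visible in the stderr block above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" before qemu-system-aarch64 is ever launched. The sketch below is a minimal, hypothetical probe (not part of the test suite) that reproduces the driver's dial against the socket path taken from the log; a refused dial means no socket_vmnet daemon is running on the Jenkins host.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path copied from the failing qemu2 starts above.
		const sock = "/var/run/socket_vmnet"

		// A refused dial here matches the "Connection refused" the
		// driver logs, i.e. no socket_vmnet daemon is listening.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
	}
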

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-360000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (35.971125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-360000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.56675ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-360000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (32.642125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-360000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (32.287458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-360000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-360000 --alsologtostderr -v=1: exit status 83 (45.191ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-360000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-360000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:06:12.499306    5883 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:12.499460    5883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:12.499464    5883 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:12.499466    5883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:12.499588    5883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:12.499824    5883 out.go:298] Setting JSON to false
	I0311 14:06:12.499832    5883 mustload.go:65] Loading cluster: no-preload-360000
	I0311 14:06:12.500023    5883 config.go:182] Loaded profile config "no-preload-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 14:06:12.504061    5883 out.go:177] * The control-plane node no-preload-360000 host is not running: state=Stopped
	I0311 14:06:12.507907    5883 out.go:177]   To start a cluster, run: "minikube start -p no-preload-360000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-360000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (32.694542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (32.536583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
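
For each failed subtest the post-mortem helper probes the host with out/minikube-darwin-arm64 status --format={{.Host}}; the --format value is a Go text/template evaluated against minikube's status data, and the exit status 7 here accompanies a stopped host, which the harness treats as acceptable ("may be ok"). A minimal sketch of how such a template renders, using a hypothetical struct that carries only the Host field (minikube's real status type has more fields):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is illustrative only; a Host field is all {{.Host}} needs.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Stopped", matching the post-mortem output above.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
			panic(err)
		}
	}
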

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
E0311 14:06:14.135327    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.818084084s)

                                                
                                                
-- stdout --
	* [embed-certs-026000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-026000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:06:12.977426    5906 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:12.977557    5906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:12.977560    5906 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:12.977562    5906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:12.977685    5906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:12.978778    5906 out.go:298] Setting JSON to false
	I0311 14:06:12.994901    5906 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3943,"bootTime":1710187229,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:12.994957    5906 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:12.999922    5906 out.go:177] * [embed-certs-026000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:13.011919    5906 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:13.007060    5906 notify.go:220] Checking for updates...
	I0311 14:06:13.019995    5906 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:13.022928    5906 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:13.026978    5906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:13.030836    5906 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:13.034995    5906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:13.038420    5906 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:13.038481    5906 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:13.038533    5906 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:13.041911    5906 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:06:13.048981    5906 start.go:297] selected driver: qemu2
	I0311 14:06:13.048988    5906 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:06:13.048997    5906 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:13.051470    5906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:06:13.053091    5906 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:06:13.056087    5906 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:06:13.056145    5906 cni.go:84] Creating CNI manager for ""
	I0311 14:06:13.056152    5906 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:13.056157    5906 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:06:13.056185    5906 start.go:340] cluster config:
	{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:13.060856    5906 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:13.068974    5906 out.go:177] * Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	I0311 14:06:13.072936    5906 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:06:13.072953    5906 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:06:13.072966    5906 cache.go:56] Caching tarball of preloaded images
	I0311 14:06:13.073029    5906 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:06:13.073036    5906 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:06:13.073106    5906 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/embed-certs-026000/config.json ...
	I0311 14:06:13.073118    5906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/embed-certs-026000/config.json: {Name:mkb6b167b08b788f6293bbc2b263746b91ea3b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:06:13.073335    5906 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:13.073367    5906 start.go:364] duration metric: took 26.416µs to acquireMachinesLock for "embed-certs-026000"
	I0311 14:06:13.073379    5906 start.go:93] Provisioning new machine with config: &{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:13.073413    5906 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:13.080990    5906 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:13.099076    5906 start.go:159] libmachine.API.Create for "embed-certs-026000" (driver="qemu2")
	I0311 14:06:13.099100    5906 client.go:168] LocalClient.Create starting
	I0311 14:06:13.099153    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:13.099182    5906 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:13.099192    5906 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:13.099235    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:13.099257    5906 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:13.099265    5906 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:13.099608    5906 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:13.234292    5906 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:13.386317    5906 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:13.386326    5906 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:13.386514    5906 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:13.398849    5906 main.go:141] libmachine: STDOUT: 
	I0311 14:06:13.398871    5906 main.go:141] libmachine: STDERR: 
	I0311 14:06:13.398919    5906 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2 +20000M
	I0311 14:06:13.409414    5906 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:13.409431    5906 main.go:141] libmachine: STDERR: 
	I0311 14:06:13.409451    5906 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:13.409456    5906 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:13.409484    5906 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2e:78:20:af:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:13.411185    5906 main.go:141] libmachine: STDOUT: 
	I0311 14:06:13.411199    5906 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:13.411219    5906 client.go:171] duration metric: took 312.121458ms to LocalClient.Create
	I0311 14:06:15.413354    5906 start.go:128] duration metric: took 2.339967584s to createHost
	I0311 14:06:15.413440    5906 start.go:83] releasing machines lock for "embed-certs-026000", held for 2.340087292s
	W0311 14:06:15.413509    5906 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:15.428819    5906 out.go:177] * Deleting "embed-certs-026000" in qemu2 ...
	W0311 14:06:15.453517    5906 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:15.453549    5906 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:20.455646    5906 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:20.456102    5906 start.go:364] duration metric: took 336.667µs to acquireMachinesLock for "embed-certs-026000"
	I0311 14:06:20.456227    5906 start.go:93] Provisioning new machine with config: &{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:20.456492    5906 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:20.467135    5906 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:20.516525    5906 start.go:159] libmachine.API.Create for "embed-certs-026000" (driver="qemu2")
	I0311 14:06:20.516565    5906 client.go:168] LocalClient.Create starting
	I0311 14:06:20.516666    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:20.516726    5906 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:20.516743    5906 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:20.516810    5906 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:20.516859    5906 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:20.516875    5906 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:20.517363    5906 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:20.660995    5906 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:20.695405    5906 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:20.695414    5906 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:20.695573    5906 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:20.707567    5906 main.go:141] libmachine: STDOUT: 
	I0311 14:06:20.707598    5906 main.go:141] libmachine: STDERR: 
	I0311 14:06:20.707661    5906 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2 +20000M
	I0311 14:06:20.718365    5906 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:20.718383    5906 main.go:141] libmachine: STDERR: 
	I0311 14:06:20.718394    5906 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:20.718398    5906 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:20.718433    5906 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:65:49:65:4d:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:20.720152    5906 main.go:141] libmachine: STDOUT: 
	I0311 14:06:20.720168    5906 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:20.720180    5906 client.go:171] duration metric: took 203.613083ms to LocalClient.Create
	I0311 14:06:22.722310    5906 start.go:128] duration metric: took 2.265843167s to createHost
	I0311 14:06:22.722373    5906 start.go:83] releasing machines lock for "embed-certs-026000", held for 2.26629675s
	W0311 14:06:22.722727    5906 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-026000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-026000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:22.734551    5906 out.go:177] 
	W0311 14:06:22.738645    5906 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:22.738678    5906 out.go:239] * 
	* 
	W0311 14:06:22.740970    5906 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:22.751371    5906 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (68.192667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.89s)
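
The qemu invocations logged above end with -netdev socket,id=net0,fd=3 because socket_vmnet_client first connects to /var/run/socket_vmnet and then launches qemu with that connection inherited as file descriptor 3; when the dial fails, there is no descriptor to inherit and the start aborts before the VM boots. Below is a sketch of that fd-passing mechanism under those assumptions, an illustration only and not minikube's or socket_vmnet's actual code:

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Dial the network daemon first; this is the step that fails in
		// the report with "Connection refused".
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("dial socket_vmnet: %v", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child (after stdin, stdout,
		// and stderr), which is what -netdev socket,...,fd=3 refers to.
		// A real invocation would pass the full argument list shown in
		// the log.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
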

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-026000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-026000 create -f testdata/busybox.yaml: exit status 1 (28.571792ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-026000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-026000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (32.519334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (31.858625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-026000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-026000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-026000 describe deploy/metrics-server -n kube-system: exit status 1 (26.657167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-026000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-026000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (32.871833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.182242041s)

                                                
                                                
-- stdout --
	* [embed-certs-026000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	* Restarting existing qemu2 VM for "embed-certs-026000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-026000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 14:06:25.317530    5948 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:25.317660    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:25.317664    5948 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:25.317667    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:25.317782    5948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:25.318741    5948 out.go:298] Setting JSON to false
	I0311 14:06:25.334998    5948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3956,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:25.335068    5948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:25.340056    5948 out.go:177] * [embed-certs-026000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:25.346963    5948 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:25.351048    5948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:25.347010    5948 notify.go:220] Checking for updates...
	I0311 14:06:25.358082    5948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:25.361061    5948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:25.364054    5948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:25.367015    5948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:25.370260    5948 config.go:182] Loaded profile config "embed-certs-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:25.370512    5948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:25.375075    5948 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 14:06:25.382023    5948 start.go:297] selected driver: qemu2
	I0311 14:06:25.382030    5948 start.go:901] validating driver "qemu2" against &{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:embed-certs-026000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:25.382098    5948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:25.384398    5948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:06:25.384439    5948 cni.go:84] Creating CNI manager for ""
	I0311 14:06:25.384446    5948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:25.384469    5948 start.go:340] cluster config:
	{Name:embed-certs-026000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-026000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:25.388790    5948 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:25.396059    5948 out.go:177] * Starting "embed-certs-026000" primary control-plane node in "embed-certs-026000" cluster
	I0311 14:06:25.400022    5948 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:06:25.400036    5948 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:06:25.400046    5948 cache.go:56] Caching tarball of preloaded images
	I0311 14:06:25.400094    5948 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:06:25.400099    5948 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:06:25.400153    5948 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/embed-certs-026000/config.json ...
	I0311 14:06:25.400646    5948 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:25.400672    5948 start.go:364] duration metric: took 20.042µs to acquireMachinesLock for "embed-certs-026000"
	I0311 14:06:25.400681    5948 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:25.400686    5948 fix.go:54] fixHost starting: 
	I0311 14:06:25.400803    5948 fix.go:112] recreateIfNeeded on embed-certs-026000: state=Stopped err=<nil>
	W0311 14:06:25.400811    5948 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:25.408056    5948 out.go:177] * Restarting existing qemu2 VM for "embed-certs-026000" ...
	I0311 14:06:25.410944    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:65:49:65:4d:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:25.412906    5948 main.go:141] libmachine: STDOUT: 
	I0311 14:06:25.412925    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:25.412953    5948 fix.go:56] duration metric: took 12.26725ms for fixHost
	I0311 14:06:25.412957    5948 start.go:83] releasing machines lock for "embed-certs-026000", held for 12.281083ms
	W0311 14:06:25.412963    5948 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:25.412997    5948 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:25.413001    5948 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:30.415023    5948 start.go:360] acquireMachinesLock for embed-certs-026000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:30.415096    5948 start.go:364] duration metric: took 47.667µs to acquireMachinesLock for "embed-certs-026000"
	I0311 14:06:30.415108    5948 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:30.415112    5948 fix.go:54] fixHost starting: 
	I0311 14:06:30.415249    5948 fix.go:112] recreateIfNeeded on embed-certs-026000: state=Stopped err=<nil>
	W0311 14:06:30.415255    5948 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:30.420681    5948 out.go:177] * Restarting existing qemu2 VM for "embed-certs-026000" ...
	I0311 14:06:30.423789    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:65:49:65:4d:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/embed-certs-026000/disk.qcow2
	I0311 14:06:30.425938    5948 main.go:141] libmachine: STDOUT: 
	I0311 14:06:30.425955    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:30.425977    5948 fix.go:56] duration metric: took 10.864583ms for fixHost
	I0311 14:06:30.425981    5948 start.go:83] releasing machines lock for "embed-certs-026000", held for 10.879834ms
	W0311 14:06:30.426026    5948 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-026000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-026000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:30.437754    5948 out.go:177] 
	W0311 14:06:30.440842    5948 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:30.440851    5948 out.go:239] * 
	* 
	W0311 14:06:30.441414    5948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:30.457802    5948 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-026000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (36.447875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.22s)
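
Every qemu2 start failure in this group has the same root cause: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the restart is aborted. A quick triage sketch, assuming the make-install layout visible in the command line above (the relaunch invocation follows the socket_vmnet README and has not been verified on this host):

    ls -l /var/run/socket_vmnet      # the unix socket should exist
    sudo pgrep -fl socket_vmnet      # the daemon should be running as root
    # if it is not, relaunch it (assumed install prefix and gateway address):
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet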

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-026000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (31.110375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
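
This failure and the ones that follow are cascades of the failed SecondStart: because the VM never came up, no kubeconfig context was written for the profile, so every kubectl --context call fails immediately. A hypothetical one-line confirmation, not part of the harness:

    kubectl config get-contexts      # embed-certs-026000 is absent, matching the error above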

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-026000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-026000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-026000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.673666ms)

** stderr ** 
	error: context "embed-certs-026000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-026000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (31.747041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-026000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (31.20475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
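
The "-want" side of the diff above is the full expected image set for Kubernetes v1.28.4; the "+got" side is empty because the VM never booted, so no images were ever loaded. The underlying listing can be rerun verbatim:

    out/minikube-darwin-arm64 -p embed-certs-026000 image list --format=json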

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-026000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-026000 --alsologtostderr -v=1: exit status 83 (42.406334ms)

-- stdout --
	* The control-plane node embed-certs-026000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-026000"

-- /stdout --
** stderr ** 
	I0311 14:06:30.699489    5974 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:30.699633    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:30.699642    5974 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:30.699644    5974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:30.699766    5974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:30.699997    5974 out.go:298] Setting JSON to false
	I0311 14:06:30.700007    5974 mustload.go:65] Loading cluster: embed-certs-026000
	I0311 14:06:30.700172    5974 config.go:182] Loaded profile config "embed-certs-026000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:30.704861    5974 out.go:177] * The control-plane node embed-certs-026000 host is not running: state=Stopped
	I0311 14:06:30.708787    5974 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-026000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-026000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (31.254625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (31.311667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-026000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.803340583s)

-- stdout --
	* [default-k8s-diff-port-406000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-406000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:06:31.408335    6009 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:31.408468    6009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:31.408472    6009 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:31.408474    6009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:31.408604    6009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:31.409715    6009 out.go:298] Setting JSON to false
	I0311 14:06:31.426421    6009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3962,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:31.426499    6009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:31.430241    6009 out.go:177] * [default-k8s-diff-port-406000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:31.437164    6009 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:31.440215    6009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:31.437256    6009 notify.go:220] Checking for updates...
	I0311 14:06:31.448202    6009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:31.452199    6009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:31.459223    6009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:31.462194    6009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:31.465659    6009 config.go:182] Loaded profile config "cert-expiration-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:31.465725    6009 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:31.465776    6009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:31.470204    6009 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:06:31.477248    6009 start.go:297] selected driver: qemu2
	I0311 14:06:31.477253    6009 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:06:31.477259    6009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:31.479445    6009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 14:06:31.482130    6009 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:06:31.485292    6009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:06:31.485313    6009 cni.go:84] Creating CNI manager for ""
	I0311 14:06:31.485318    6009 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:31.485322    6009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:06:31.485354    6009 start.go:340] cluster config:
	{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:31.489456    6009 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:31.497204    6009 out.go:177] * Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	I0311 14:06:31.501253    6009 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:06:31.501270    6009 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:06:31.501275    6009 cache.go:56] Caching tarball of preloaded images
	I0311 14:06:31.501329    6009 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:06:31.501335    6009 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:06:31.501386    6009 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/default-k8s-diff-port-406000/config.json ...
	I0311 14:06:31.501396    6009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/default-k8s-diff-port-406000/config.json: {Name:mk2f5a589b9cc91b23d23823ab0efe7687ac55e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:06:31.501586    6009 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:31.501616    6009 start.go:364] duration metric: took 22.916µs to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0311 14:06:31.501627    6009 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:31.501650    6009 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:31.510213    6009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:31.524893    6009 start.go:159] libmachine.API.Create for "default-k8s-diff-port-406000" (driver="qemu2")
	I0311 14:06:31.524918    6009 client.go:168] LocalClient.Create starting
	I0311 14:06:31.524973    6009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:31.525001    6009 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:31.525010    6009 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:31.525050    6009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:31.525070    6009 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:31.525076    6009 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:31.525424    6009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:31.662731    6009 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:31.736991    6009 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:31.737000    6009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:31.737202    6009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:31.749674    6009 main.go:141] libmachine: STDOUT: 
	I0311 14:06:31.749694    6009 main.go:141] libmachine: STDERR: 
	I0311 14:06:31.749744    6009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2 +20000M
	I0311 14:06:31.761097    6009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:31.761120    6009 main.go:141] libmachine: STDERR: 
	I0311 14:06:31.761136    6009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:31.761142    6009 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:31.761170    6009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:8a:aa:f9:4b:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:31.763122    6009 main.go:141] libmachine: STDOUT: 
	I0311 14:06:31.763138    6009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:31.763154    6009 client.go:171] duration metric: took 238.239125ms to LocalClient.Create
	I0311 14:06:33.765402    6009 start.go:128] duration metric: took 2.26376325s to createHost
	I0311 14:06:33.765506    6009 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 2.263940292s
	W0311 14:06:33.765565    6009 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:33.778846    6009 out.go:177] * Deleting "default-k8s-diff-port-406000" in qemu2 ...
	W0311 14:06:33.806777    6009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:33.806814    6009 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:38.808928    6009 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:38.809293    6009 start.go:364] duration metric: took 290.375µs to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0311 14:06:38.809424    6009 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:38.809768    6009 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:38.818443    6009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:38.867944    6009 start.go:159] libmachine.API.Create for "default-k8s-diff-port-406000" (driver="qemu2")
	I0311 14:06:38.867994    6009 client.go:168] LocalClient.Create starting
	I0311 14:06:38.868094    6009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:38.868150    6009 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:38.868173    6009 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:38.868262    6009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:38.868303    6009 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:38.868320    6009 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:38.868834    6009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:39.016531    6009 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:39.108344    6009 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:39.108349    6009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:39.108532    6009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:39.121054    6009 main.go:141] libmachine: STDOUT: 
	I0311 14:06:39.121079    6009 main.go:141] libmachine: STDERR: 
	I0311 14:06:39.121145    6009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2 +20000M
	I0311 14:06:39.131718    6009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:39.131740    6009 main.go:141] libmachine: STDERR: 
	I0311 14:06:39.131752    6009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:39.131758    6009 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:39.131786    6009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:c9:a8:3d:fb:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:39.133479    6009 main.go:141] libmachine: STDOUT: 
	I0311 14:06:39.133499    6009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:39.133512    6009 client.go:171] duration metric: took 265.518834ms to LocalClient.Create
	I0311 14:06:41.135641    6009 start.go:128] duration metric: took 2.325882917s to createHost
	I0311 14:06:41.135705    6009 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 2.326451792s
	W0311 14:06:41.136117    6009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:41.146676    6009 out.go:177] 
	W0311 14:06:41.153863    6009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:41.153899    6009 out.go:239] * 
	* 
	W0311 14:06:41.156585    6009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:41.164640    6009 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (67.393958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)
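
Note that disk preparation succeeds even in these failed runs: qemu-img converts the raw boot image to qcow2 and grows it without error, and only the subsequent socket_vmnet attach fails. The two disk steps, reproduced standalone with shortened placeholder paths:

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2    # raw image -> qcow2
    qemu-img resize disk.qcow2 +20000M                            # grow virtual size by 20000 MiB
    qemu-img info disk.qcow2                                      # verify format and new virtual size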

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-440000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-440000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.848615708s)

-- stdout --
	* [newest-cni-440000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-440000" primary control-plane node in "newest-cni-440000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-440000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:06:35.742986    6027 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:35.743125    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:35.743128    6027 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:35.743130    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:35.743258    6027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:35.744340    6027 out.go:298] Setting JSON to false
	I0311 14:06:35.760597    6027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3966,"bootTime":1710187229,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:35.760659    6027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:35.766479    6027 out.go:177] * [newest-cni-440000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:35.773387    6027 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:35.778447    6027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:35.773441    6027 notify.go:220] Checking for updates...
	I0311 14:06:35.785476    6027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:35.788476    6027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:35.791467    6027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:35.794419    6027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:35.797824    6027 config.go:182] Loaded profile config "default-k8s-diff-port-406000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:35.797895    6027 config.go:182] Loaded profile config "multinode-457000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:35.797944    6027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:35.802465    6027 out.go:177] * Using the qemu2 driver based on user configuration
	I0311 14:06:35.809443    6027 start.go:297] selected driver: qemu2
	I0311 14:06:35.809449    6027 start.go:901] validating driver "qemu2" against <nil>
	I0311 14:06:35.809457    6027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:35.811781    6027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0311 14:06:35.811809    6027 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0311 14:06:35.819347    6027 out.go:177] * Automatically selected the socket_vmnet network
	I0311 14:06:35.822540    6027 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 14:06:35.822577    6027 cni.go:84] Creating CNI manager for ""
	I0311 14:06:35.822586    6027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:35.822591    6027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 14:06:35.822618    6027 start.go:340] cluster config:
	{Name:newest-cni-440000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:35.827352    6027 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:35.834382    6027 out.go:177] * Starting "newest-cni-440000" primary control-plane node in "newest-cni-440000" cluster
	I0311 14:06:35.838472    6027 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 14:06:35.838495    6027 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 14:06:35.838502    6027 cache.go:56] Caching tarball of preloaded images
	I0311 14:06:35.838554    6027 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:06:35.838566    6027 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0311 14:06:35.838641    6027 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/newest-cni-440000/config.json ...
	I0311 14:06:35.838652    6027 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/newest-cni-440000/config.json: {Name:mk8bf4005b78124c9b2d659ceb8beca778a97f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 14:06:35.838869    6027 start.go:360] acquireMachinesLock for newest-cni-440000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:35.838902    6027 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "newest-cni-440000"
	I0311 14:06:35.838915    6027 start.go:93] Provisioning new machine with config: &{Name:newest-cni-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:35.838950    6027 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:35.847447    6027 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:35.865765    6027 start.go:159] libmachine.API.Create for "newest-cni-440000" (driver="qemu2")
	I0311 14:06:35.865803    6027 client.go:168] LocalClient.Create starting
	I0311 14:06:35.865872    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:35.865903    6027 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:35.865915    6027 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:35.865966    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:35.865990    6027 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:35.865999    6027 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:35.866388    6027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:36.000285    6027 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:36.060927    6027 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:36.060932    6027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:36.061091    6027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:36.073394    6027 main.go:141] libmachine: STDOUT: 
	I0311 14:06:36.073420    6027 main.go:141] libmachine: STDERR: 
	I0311 14:06:36.073472    6027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2 +20000M
	I0311 14:06:36.084135    6027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:36.084157    6027 main.go:141] libmachine: STDERR: 
	I0311 14:06:36.084171    6027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:36.084175    6027 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:36.084220    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:99:b4:64:f9:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:36.085918    6027 main.go:141] libmachine: STDOUT: 
	I0311 14:06:36.085934    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:36.085951    6027 client.go:171] duration metric: took 220.148583ms to LocalClient.Create
	I0311 14:06:38.088117    6027 start.go:128] duration metric: took 2.249201209s to createHost
	I0311 14:06:38.088214    6027 start.go:83] releasing machines lock for "newest-cni-440000", held for 2.249363042s
	W0311 14:06:38.088254    6027 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:38.098320    6027 out.go:177] * Deleting "newest-cni-440000" in qemu2 ...
	W0311 14:06:38.127296    6027 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:38.127324    6027 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:43.129389    6027 start.go:360] acquireMachinesLock for newest-cni-440000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:43.129802    6027 start.go:364] duration metric: took 269.208µs to acquireMachinesLock for "newest-cni-440000"
	I0311 14:06:43.129932    6027 start.go:93] Provisioning new machine with config: &{Name:newest-cni-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0311 14:06:43.130170    6027 start.go:125] createHost starting for "" (driver="qemu2")
	I0311 14:06:43.140720    6027 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0311 14:06:43.189275    6027 start.go:159] libmachine.API.Create for "newest-cni-440000" (driver="qemu2")
	I0311 14:06:43.189322    6027 client.go:168] LocalClient.Create starting
	I0311 14:06:43.189417    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/ca.pem
	I0311 14:06:43.189473    6027 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:43.189492    6027 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:43.189553    6027 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18358-1220/.minikube/certs/cert.pem
	I0311 14:06:43.189580    6027 main.go:141] libmachine: Decoding PEM data...
	I0311 14:06:43.189592    6027 main.go:141] libmachine: Parsing certificate...
	I0311 14:06:43.190212    6027 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso...
	I0311 14:06:43.338105    6027 main.go:141] libmachine: Creating SSH key...
	I0311 14:06:43.482783    6027 main.go:141] libmachine: Creating Disk image...
	I0311 14:06:43.482794    6027 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0311 14:06:43.483003    6027 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2.raw /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:43.495144    6027 main.go:141] libmachine: STDOUT: 
	I0311 14:06:43.495170    6027 main.go:141] libmachine: STDERR: 
	I0311 14:06:43.495240    6027 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2 +20000M
	I0311 14:06:43.505898    6027 main.go:141] libmachine: STDOUT: Image resized.
	
	I0311 14:06:43.505915    6027 main.go:141] libmachine: STDERR: 
	I0311 14:06:43.505928    6027 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:43.505935    6027 main.go:141] libmachine: Starting QEMU VM...
	I0311 14:06:43.505978    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8f:0d:df:33:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:43.507681    6027 main.go:141] libmachine: STDOUT: 
	I0311 14:06:43.507697    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:43.507713    6027 client.go:171] duration metric: took 318.392458ms to LocalClient.Create
	I0311 14:06:45.509868    6027 start.go:128] duration metric: took 2.379716792s to createHost
	I0311 14:06:45.509966    6027 start.go:83] releasing machines lock for "newest-cni-440000", held for 2.38020475s
	W0311 14:06:45.510352    6027 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-440000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-440000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:45.525018    6027 out.go:177] 
	W0311 14:06:45.532097    6027 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:45.532150    6027 out.go:239] * 
	* 
	W0311 14:06:45.534858    6027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:45.545066    6027 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-440000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000: exit status 7 (68.46575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-440000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
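Both create attempts above fail at the same point: socket_vmnet_client is refused on /var/run/socket_vmnet, so QEMU is never launched and the failure is on the build host rather than in the guest. A minimal triage sketch for the agent follows; it assumes the /opt/socket_vmnet layout shown in the log, and the Homebrew service line is an assumption about how the daemon was installed:

	# Does the daemon's unix socket exist? "Connection refused" alongside a
	# missing or stale socket means socket_vmnet itself is not running.
	ls -l /var/run/socket_vmnet

	# Is the daemon process alive?
	pgrep -fl socket_vmnet

	# Probe the socket with the same client binary the qemu2 driver uses; a
	# healthy daemon accepts the connection and runs the trailing command.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

	# If the daemon is down and was installed via Homebrew (assumption),
	# restarting the service normally clears this whole class of failures.
	sudo brew services restart socket_vmnet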

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-406000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-406000 create -f testdata/busybox.yaml: exit status 1 (29.737916ms)

** stderr ** 
	error: context "default-k8s-diff-port-406000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-406000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (31.638416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (30.917625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
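This failure is a cascade rather than an independent bug: the context is missing because the profile's earlier start never completed, so minikube never wrote a default-k8s-diff-port-406000 entry into the kubeconfig. A quick confirmation sketch, reusing the KUBECONFIG path from the environment dump above:

	# List the contexts the run's kubeconfig actually contains; the
	# default-k8s-diff-port-406000 entry will be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig \
		kubectl config get-contexts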

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-406000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system: exit status 1 (26.715375ms)

** stderr ** 
	error: context "default-k8s-diff-port-406000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (31.341792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
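The addons enable step itself exits cleanly (it only rewrites the profile's stored addon configuration), but the verification needs a live apiserver. For reference, with a running cluster the assertion above reduces to grepping the deployment description for the overridden registry/image string (sketch, using the exact expected value from the failure message):

	kubectl --context default-k8s-diff-port-406000 describe deploy/metrics-server -n kube-system \
		| grep 'fake.domain/registry.k8s.io/echoserver:1.4'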

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.415796167s)

-- stdout --
	* [default-k8s-diff-port-406000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:06:45.220059    6084 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:45.220209    6084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:45.220212    6084 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:45.220214    6084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:45.220329    6084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:45.221354    6084 out.go:298] Setting JSON to false
	I0311 14:06:45.237114    6084 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3976,"bootTime":1710187229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:45.237176    6084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:45.239725    6084 out.go:177] * [default-k8s-diff-port-406000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:45.247373    6084 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:45.250284    6084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:45.247443    6084 notify.go:220] Checking for updates...
	I0311 14:06:45.254308    6084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:45.257321    6084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:45.260201    6084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:45.263318    6084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:45.266646    6084 config.go:182] Loaded profile config "default-k8s-diff-port-406000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:45.266920    6084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:45.270274    6084 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 14:06:45.277293    6084 start.go:297] selected driver: qemu2
	I0311 14:06:45.277299    6084 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:45.277342    6084 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:45.279542    6084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 14:06:45.279569    6084 cni.go:84] Creating CNI manager for ""
	I0311 14:06:45.279576    6084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:45.279599    6084 start.go:340] cluster config:
	{Name:default-k8s-diff-port-406000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:45.283743    6084 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:45.292307    6084 out.go:177] * Starting "default-k8s-diff-port-406000" primary control-plane node in "default-k8s-diff-port-406000" cluster
	I0311 14:06:45.297209    6084 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 14:06:45.297222    6084 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 14:06:45.297232    6084 cache.go:56] Caching tarball of preloaded images
	I0311 14:06:45.297283    6084 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:06:45.297288    6084 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 14:06:45.297340    6084 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/default-k8s-diff-port-406000/config.json ...
	I0311 14:06:45.297817    6084 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:45.510096    6084 start.go:364] duration metric: took 212.245958ms to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0311 14:06:45.510306    6084 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:45.510336    6084 fix.go:54] fixHost starting: 
	I0311 14:06:45.511018    6084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-406000: state=Stopped err=<nil>
	W0311 14:06:45.511064    6084 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:45.525018    6084 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	I0311 14:06:45.529118    6084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:c9:a8:3d:fb:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:45.539179    6084 main.go:141] libmachine: STDOUT: 
	I0311 14:06:45.539300    6084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:45.539480    6084 fix.go:56] duration metric: took 29.141917ms for fixHost
	I0311 14:06:45.539504    6084 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 29.354958ms
	W0311 14:06:45.539560    6084 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:45.539707    6084 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:45.539723    6084 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:50.541955    6084 start.go:360] acquireMachinesLock for default-k8s-diff-port-406000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:50.542370    6084 start.go:364] duration metric: took 298.375µs to acquireMachinesLock for "default-k8s-diff-port-406000"
	I0311 14:06:50.542442    6084 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:50.542465    6084 fix.go:54] fixHost starting: 
	I0311 14:06:50.543177    6084 fix.go:112] recreateIfNeeded on default-k8s-diff-port-406000: state=Stopped err=<nil>
	W0311 14:06:50.543205    6084 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:50.552711    6084 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-406000" ...
	I0311 14:06:50.557922    6084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:c9:a8:3d:fb:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/default-k8s-diff-port-406000/disk.qcow2
	I0311 14:06:50.567874    6084 main.go:141] libmachine: STDOUT: 
	I0311 14:06:50.567956    6084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:50.568055    6084 fix.go:56] duration metric: took 25.591292ms for fixHost
	I0311 14:06:50.568076    6084 start.go:83] releasing machines lock for "default-k8s-diff-port-406000", held for 25.685333ms
	W0311 14:06:50.568276    6084 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-406000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:50.575681    6084 out.go:177] 
	W0311 14:06:50.578849    6084 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:50.578872    6084 out.go:239] * 
	* 
	W0311 14:06:50.581357    6084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:50.590631    6084 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-406000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (72.050084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.49s)
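Note the shape of this failure versus a first start: fixHost finds the existing machine in state=Stopped, attempts the VM boot, sleeps five seconds, retries once, and both boots are refused on the same socket, so nearly all of the 5.49s runtime is the retry delay. The post-mortem is consistent with that reading; a sketch of the same check (the exit-code breakdown is my reading of minikube's status flags, not something the log states):

	out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000
	echo $?	# 7 appears to combine the host/cluster/kubelet "not running" flags,
	        # i.e. a stopped host, hence the helper's "(may be ok)" note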

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-440000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-440000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.188666917s)

-- stdout --
	* [newest-cni-440000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-440000" primary control-plane node in "newest-cni-440000" cluster
	* Restarting existing qemu2 VM for "newest-cni-440000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-440000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0311 14:06:49.556066    6119 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:49.556202    6119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:49.556206    6119 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:49.556208    6119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:49.556335    6119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:49.557366    6119 out.go:298] Setting JSON to false
	I0311 14:06:49.573223    6119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3980,"bootTime":1710187229,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 14:06:49.573286    6119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 14:06:49.578461    6119 out.go:177] * [newest-cni-440000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 14:06:49.585461    6119 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 14:06:49.588396    6119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 14:06:49.585516    6119 notify.go:220] Checking for updates...
	I0311 14:06:49.595379    6119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 14:06:49.598465    6119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 14:06:49.601440    6119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 14:06:49.604345    6119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 14:06:49.607856    6119 config.go:182] Loaded profile config "newest-cni-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 14:06:49.608106    6119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 14:06:49.612428    6119 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 14:06:49.619361    6119 start.go:297] selected driver: qemu2
	I0311 14:06:49.619367    6119 start.go:901] validating driver "qemu2" against &{Name:newest-cni-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:49.619432    6119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 14:06:49.621768    6119 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0311 14:06:49.621817    6119 cni.go:84] Creating CNI manager for ""
	I0311 14:06:49.621825    6119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 14:06:49.621846    6119 start.go:340] cluster config:
	{Name:newest-cni-440000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-440000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 14:06:49.626148    6119 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 14:06:49.633440    6119 out.go:177] * Starting "newest-cni-440000" primary control-plane node in "newest-cni-440000" cluster
	I0311 14:06:49.638429    6119 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 14:06:49.638446    6119 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 14:06:49.638463    6119 cache.go:56] Caching tarball of preloaded images
	I0311 14:06:49.638521    6119 preload.go:173] Found /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 14:06:49.638527    6119 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0311 14:06:49.638596    6119 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/newest-cni-440000/config.json ...
	I0311 14:06:49.639099    6119 start.go:360] acquireMachinesLock for newest-cni-440000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:49.639126    6119 start.go:364] duration metric: took 20.459µs to acquireMachinesLock for "newest-cni-440000"
	I0311 14:06:49.639134    6119 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:49.639143    6119 fix.go:54] fixHost starting: 
	I0311 14:06:49.639261    6119 fix.go:112] recreateIfNeeded on newest-cni-440000: state=Stopped err=<nil>
	W0311 14:06:49.639269    6119 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:49.643384    6119 out.go:177] * Restarting existing qemu2 VM for "newest-cni-440000" ...
	I0311 14:06:49.650472    6119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8f:0d:df:33:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:49.652525    6119 main.go:141] libmachine: STDOUT: 
	I0311 14:06:49.652547    6119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:49.652578    6119 fix.go:56] duration metric: took 13.437458ms for fixHost
	I0311 14:06:49.652583    6119 start.go:83] releasing machines lock for "newest-cni-440000", held for 13.454417ms
	W0311 14:06:49.652589    6119 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:49.652624    6119 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:49.652629    6119 start.go:728] Will try again in 5 seconds ...
	I0311 14:06:54.654680    6119 start.go:360] acquireMachinesLock for newest-cni-440000: {Name:mk7feebc3e050e7a53fc15adc6ae70f5e7b565c0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0311 14:06:54.655116    6119 start.go:364] duration metric: took 345.208µs to acquireMachinesLock for "newest-cni-440000"
	I0311 14:06:54.655252    6119 start.go:96] Skipping create...Using existing machine configuration
	I0311 14:06:54.655275    6119 fix.go:54] fixHost starting: 
	I0311 14:06:54.655991    6119 fix.go:112] recreateIfNeeded on newest-cni-440000: state=Stopped err=<nil>
	W0311 14:06:54.656017    6119 fix.go:138] unexpected machine state, will restart: <nil>
	I0311 14:06:54.659470    6119 out.go:177] * Restarting existing qemu2 VM for "newest-cni-440000" ...
	I0311 14:06:54.666652    6119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8f:0d:df:33:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18358-1220/.minikube/machines/newest-cni-440000/disk.qcow2
	I0311 14:06:54.677077    6119 main.go:141] libmachine: STDOUT: 
	I0311 14:06:54.677143    6119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0311 14:06:54.677211    6119 fix.go:56] duration metric: took 21.940042ms for fixHost
	I0311 14:06:54.677232    6119 start.go:83] releasing machines lock for "newest-cni-440000", held for 22.091833ms
	W0311 14:06:54.677408    6119 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-440000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-440000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0311 14:06:54.686414    6119 out.go:177] 
	W0311 14:06:54.689612    6119 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0311 14:06:54.689644    6119 out.go:239] * 
	* 
	W0311 14:06:54.692353    6119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 14:06:54.700323    6119 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-440000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000: exit status 7 (70.533167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-440000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
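The start failures in this section share the root cause visible above: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A quick triage on the build agent could look like the following sketch (assumes a launchd-managed socket_vmnet install; the paths are the ones from the log, the service label is a guess):

	# Does the socket the driver dials exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet   # label varies by install method
	# Reproduce minikube's exact failure mode with a harmless wrapped command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the last command also prints "Connection refused", restarting the socket_vmnet daemon on the agent should clear this family of failures.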

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-406000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (34.222625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-406000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-406000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-406000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.662125ms)

** stderr ** 
	error: context "default-k8s-diff-port-406000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-406000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (32.152792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-406000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (31.075667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
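The "(-want +got)" block above is a go-cmp style diff: each "-" line is an image the test expected the profile's image list to report for v1.28.4, and the got side is empty because a stopped VM returns no images at all. The check can be reproduced by hand against the profile (illustrative; this is the same command the test runs):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-406000 image list --format=json
	# on a healthy v1.28.4 cluster the JSON would include entries such as
	# registry.k8s.io/kube-apiserver:v1.28.4 and registry.k8s.io/pause:3.9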

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-406000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-406000 --alsologtostderr -v=1: exit status 83 (42.342125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-406000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-406000"

-- /stdout --
** stderr ** 
	I0311 14:06:50.876491    6138 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:50.876618    6138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:50.876622    6138 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:50.876624    6138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:50.876741    6138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:50.876955    6138 out.go:298] Setting JSON to false
	I0311 14:06:50.876963    6138 mustload.go:65] Loading cluster: default-k8s-diff-port-406000
	I0311 14:06:50.877145    6138 config.go:182] Loaded profile config "default-k8s-diff-port-406000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 14:06:50.881549    6138 out.go:177] * The control-plane node default-k8s-diff-port-406000 host is not running: state=Stopped
	I0311 14:06:50.885569    6138 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-406000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-406000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (31.233417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (31.465625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-406000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-440000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000: exit status 7 (32.330958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-440000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-440000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-440000 --alsologtostderr -v=1: exit status 83 (42.049084ms)

-- stdout --
	* The control-plane node newest-cni-440000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-440000"

-- /stdout --
** stderr ** 
	I0311 14:06:54.896456    6170 out.go:291] Setting OutFile to fd 1 ...
	I0311 14:06:54.896615    6170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:54.896619    6170 out.go:304] Setting ErrFile to fd 2...
	I0311 14:06:54.896621    6170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 14:06:54.896749    6170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 14:06:54.896969    6170 out.go:298] Setting JSON to false
	I0311 14:06:54.896976    6170 mustload.go:65] Loading cluster: newest-cni-440000
	I0311 14:06:54.897179    6170 config.go:182] Loaded profile config "newest-cni-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0311 14:06:54.900113    6170 out.go:177] * The control-plane node newest-cni-440000 host is not running: state=Stopped
	I0311 14:06:54.904063    6170 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-440000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-440000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000: exit status 7 (32.372208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-440000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000: exit status 7 (32.184834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-440000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
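Both Pause failures are downstream of the same stopped host: minikube exits with status 83 and prints "host is not running: state=Stopped" before any pause is attempted, because the earlier SecondStart never brought the VM up. A state guard shows the precondition the pause step depends on (a hypothetical wrapper, not part of the harness; PROFILE stands in for either profile name):

	PROFILE=newest-cni-440000
	STATE=$(out/minikube-darwin-arm64 status -p "$PROFILE" --format '{{.Host}}')
	if [ "$STATE" = "Running" ]; then
	    out/minikube-darwin-arm64 pause -p "$PROFILE" --alsologtostderr -v=1
	else
	    echo "host is $STATE; run: minikube start -p $PROFILE"
	fi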


Test pass (160/281)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.28.4/json-events 26.95
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.29.0-rc.2/json-events 19
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.42
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 203.73
38 TestAddons/parallel/Registry 18.46
40 TestAddons/parallel/InspektorGadget 10.25
41 TestAddons/parallel/MetricsServer 5.25
44 TestAddons/parallel/CSI 51.77
45 TestAddons/parallel/Headlamp 12.52
46 TestAddons/parallel/CloudSpanner 5.17
47 TestAddons/parallel/LocalPath 54.81
48 TestAddons/parallel/NvidiaDevicePlugin 5.15
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.07
53 TestAddons/StoppedEnableDisable 12.4
61 TestHyperKitDriverInstallOrUpdate 9.26
64 TestErrorSpam/setup 31.46
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.25
67 TestErrorSpam/pause 0.64
68 TestErrorSpam/unpause 0.61
69 TestErrorSpam/stop 64.27
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 77.31
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 35.1
76 TestFunctional/serial/KubeContext 0.03
77 TestFunctional/serial/KubectlGetPods 0.05
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.51
81 TestFunctional/serial/CacheCmd/cache/add_local 1.22
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
83 TestFunctional/serial/CacheCmd/cache/list 0.04
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.13
86 TestFunctional/serial/CacheCmd/cache/delete 0.08
87 TestFunctional/serial/MinikubeKubectlCmd 0.53
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
89 TestFunctional/serial/ExtraConfig 32.65
90 TestFunctional/serial/ComponentHealth 0.04
91 TestFunctional/serial/LogsCmd 0.66
92 TestFunctional/serial/LogsFileCmd 0.61
93 TestFunctional/serial/InvalidService 4.2
95 TestFunctional/parallel/ConfigCmd 0.24
96 TestFunctional/parallel/DashboardCmd 6.73
97 TestFunctional/parallel/DryRun 0.24
98 TestFunctional/parallel/InternationalLanguage 0.12
99 TestFunctional/parallel/StatusCmd 0.25
104 TestFunctional/parallel/AddonsCmd 0.13
105 TestFunctional/parallel/PersistentVolumeClaim 24.16
107 TestFunctional/parallel/SSHCmd 0.13
108 TestFunctional/parallel/CpCmd 0.41
110 TestFunctional/parallel/FileSync 0.07
111 TestFunctional/parallel/CertSync 0.41
115 TestFunctional/parallel/NodeLabels 0.04
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
119 TestFunctional/parallel/License 1.35
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.18
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
126 TestFunctional/parallel/ImageCommands/ImageBuild 6.11
127 TestFunctional/parallel/ImageCommands/Setup 5.55
128 TestFunctional/parallel/DockerEnv/bash 0.38
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
132 TestFunctional/parallel/ServiceCmd/DeployApp 14.09
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.26
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.62
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.5
136 TestFunctional/parallel/ServiceCmd/List 0.09
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.11
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
139 TestFunctional/parallel/ServiceCmd/Format 0.1
140 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
151 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
154 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
157 TestFunctional/parallel/ProfileCmd/profile_list 0.15
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
159 TestFunctional/parallel/MountCmd/any-port 10.08
160 TestFunctional/parallel/MountCmd/specific-port 1.06
161 TestFunctional/parallel/MountCmd/VerifyCleanup 2.39
162 TestFunctional/delete_addon-resizer_images 0.11
163 TestFunctional/delete_my-image_image 0.04
164 TestFunctional/delete_minikube_cached_images 0.04
168 TestMutliControlPlane/serial/StartCluster 251.63
169 TestMutliControlPlane/serial/DeployApp 9.23
170 TestMutliControlPlane/serial/PingHostFromPods 0.81
171 TestMutliControlPlane/serial/AddWorkerNode 76.63
172 TestMutliControlPlane/serial/NodeLabels 0.12
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 2.36
174 TestMutliControlPlane/serial/CopyFile 4.64
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 80.62
186 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 3.44
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.33
220 TestMainNoArgs 0.04
265 TestStoppedBinaryUpgrade/Setup 4.96
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.06
284 TestNoKubernetes/serial/ProfileList 0.18
285 TestNoKubernetes/serial/Stop 3.22
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
302 TestStartStop/group/old-k8s-version/serial/Stop 3.19
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
313 TestStartStop/group/no-preload/serial/Stop 2.14
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
324 TestStartStop/group/embed-certs/serial/Stop 2.11
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.61
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
342 TestStartStop/group/newest-cni/serial/Stop 3.71
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-006000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-006000: exit status 85 (98.837875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-006000 | jenkins | v1.32.0 | 11 Mar 24 13:09 PDT |          |
	|         | -p download-only-006000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:09:36
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:09:36.858016    1654 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:09:36.858153    1654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:09:36.858156    1654 out.go:304] Setting ErrFile to fd 2...
	I0311 13:09:36.858159    1654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:09:36.858280    1654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	W0311 13:09:36.858389    1654 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18358-1220/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18358-1220/.minikube/config/config.json: no such file or directory
	I0311 13:09:36.859631    1654 out.go:298] Setting JSON to true
	I0311 13:09:36.876725    1654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":547,"bootTime":1710187229,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:09:36.876795    1654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:09:36.882542    1654 out.go:97] [download-only-006000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:09:36.885561    1654 out.go:169] MINIKUBE_LOCATION=18358
	I0311 13:09:36.882698    1654 notify.go:220] Checking for updates...
	W0311 13:09:36.882707    1654 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 13:09:36.892504    1654 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:09:36.895545    1654 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:09:36.898586    1654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:09:36.901562    1654 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	W0311 13:09:36.907605    1654 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 13:09:36.907850    1654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:09:36.912509    1654 out.go:97] Using the qemu2 driver based on user configuration
	I0311 13:09:36.912530    1654 start.go:297] selected driver: qemu2
	I0311 13:09:36.912545    1654 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:09:36.912606    1654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:09:36.916529    1654 out.go:169] Automatically selected the socket_vmnet network
	I0311 13:09:36.922386    1654 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 13:09:36.922495    1654 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 13:09:36.922542    1654 cni.go:84] Creating CNI manager for ""
	I0311 13:09:36.922560    1654 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0311 13:09:36.922614    1654 start.go:340] cluster config:
	{Name:download-only-006000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:09:36.928395    1654 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:09:36.932510    1654 out.go:97] Downloading VM boot image ...
	I0311 13:09:36.932522    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/iso/arm64/minikube-v1.32.1-1708638130-18020-arm64.iso
	I0311 13:09:55.125133    1654 out.go:97] Starting "download-only-006000" primary control-plane node in "download-only-006000" cluster
	I0311 13:09:55.125161    1654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:09:55.430125    1654 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 13:09:55.430234    1654 cache.go:56] Caching tarball of preloaded images
	I0311 13:09:55.431020    1654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:09:55.436517    1654 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 13:09:55.436544    1654 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:09:56.129220    1654 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0311 13:10:15.088387    1654 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:15.088551    1654 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:15.790088    1654 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0311 13:10:15.790293    1654 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/download-only-006000/config.json ...
	I0311 13:10:15.790309    1654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/download-only-006000/config.json: {Name:mk7542d81dad174abfa1be338e75785717485840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:10:15.790537    1654 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0311 13:10:15.790714    1654 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0311 13:10:16.504466    1654 out.go:169] 
	W0311 13:10:16.509367    1654 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0 0x10876f1c0] Decompressors:map[bz2:0x14000a00290 gz:0x14000a00298 tar:0x14000a00240 tar.bz2:0x14000a00250 tar.gz:0x14000a00260 tar.xz:0x14000a00270 tar.zst:0x14000a00280 tbz2:0x14000a00250 tgz:0x14000a00260 txz:0x14000a00270 tzst:0x14000a00280 xz:0x14000a002a0 zip:0x14000a002b0 zst:0x14000a002a8] Getters:map[file:0x14000640d10 http:0x14000cb8690 https:0x14000cb86e0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0311 13:10:16.509389    1654 out_reason.go:110] 
	W0311 13:10:16.517366    1654 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0311 13:10:16.521278    1654 out.go:169] 
	
	
	* The control-plane node download-only-006000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-006000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
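The Last Start log captured above ends in the kubectl caching failure: the download 404s because https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 does not exist, most likely because no darwin/arm64 kubectl artifact was published for a release that old. The hypothesis is cheap to verify from any machine (illustrative commands, not part of the test run):

	# expect 404: v1.20.0 appears to predate darwin/arm64 kubectl artifacts
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# expect 200: the v1.28.4 artifact downloads fine later in this same run
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256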

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-006000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.28.4/json-events (26.95s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-707000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-707000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (26.95032125s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (26.95s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-707000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-707000: exit status 85 (89.981542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-006000 | jenkins | v1.32.0 | 11 Mar 24 13:09 PDT |                     |
	|         | -p download-only-006000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| delete  | -p download-only-006000        | download-only-006000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| start   | -o=json --download-only        | download-only-707000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT |                     |
	|         | -p download-only-707000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:10:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:10:17.191686    1692 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:10:17.191852    1692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:10:17.191855    1692 out.go:304] Setting ErrFile to fd 2...
	I0311 13:10:17.191857    1692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:10:17.192005    1692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:10:17.193070    1692 out.go:298] Setting JSON to true
	I0311 13:10:17.209181    1692 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":588,"bootTime":1710187229,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:10:17.209242    1692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:10:17.213059    1692 out.go:97] [download-only-707000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:10:17.217842    1692 out.go:169] MINIKUBE_LOCATION=18358
	I0311 13:10:17.213190    1692 notify.go:220] Checking for updates...
	I0311 13:10:17.223822    1692 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:10:17.226827    1692 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:10:17.229932    1692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:10:17.231343    1692 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	W0311 13:10:17.237855    1692 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 13:10:17.238046    1692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:10:17.240872    1692 out.go:97] Using the qemu2 driver based on user configuration
	I0311 13:10:17.240879    1692 start.go:297] selected driver: qemu2
	I0311 13:10:17.240882    1692 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:10:17.240921    1692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:10:17.243862    1692 out.go:169] Automatically selected the socket_vmnet network
	I0311 13:10:17.249053    1692 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 13:10:17.249141    1692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 13:10:17.249167    1692 cni.go:84] Creating CNI manager for ""
	I0311 13:10:17.249174    1692 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:10:17.249180    1692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 13:10:17.249221    1692 start.go:340] cluster config:
	{Name:download-only-707000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:10:17.253441    1692 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:10:17.256882    1692 out.go:97] Starting "download-only-707000" primary control-plane node in "download-only-707000" cluster
	I0311 13:10:17.256891    1692 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:10:17.917214    1692 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:10:17.917288    1692 cache.go:56] Caching tarball of preloaded images
	I0311 13:10:17.918030    1692 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:10:17.922631    1692 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0311 13:10:17.922655    1692 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:18.505418    1692 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0311 13:10:35.944745    1692 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:35.944901    1692 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:36.526608    1692 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0311 13:10:36.526798    1692 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/download-only-707000/config.json ...
	I0311 13:10:36.526814    1692 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/download-only-707000/config.json: {Name:mkbf60f45b9bd394944f9249c1e1aea49bb3145c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 13:10:36.527033    1692 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0311 13:10:36.527167    1692 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-707000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-707000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
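
The download step in the log above fetches the preload tarball with an md5 digest carried in the URL's ?checksum=md5:... query hint, then verifies the file before caching it. A minimal Go sketch of that download-then-verify pattern (not minikube's actual download code; the URL, destination path, and reuse of the digest shown above are illustrative):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadAndVerify streams url to dest while hashing it, then compares
    // the result against the expected MD5 hex digest.
    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        h := md5.New()
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // Illustrative values; the log above shows the real GCS URL and digest.
        if err := downloadAndVerify("https://example.com/preload.tar.lz4",
            "/tmp/preload.tar.lz4", "6fb922d1d9dc01a9d3c0b965ed219613"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }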

TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-707000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.29.0-rc.2/json-events (19s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-016000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-016000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (19.00278625s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (19.00s)
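
The json-events subtests drive `minikube start -o=json`, which emits one JSON event per line on stdout. A minimal sketch of consuming that stream (deliberately schema-agnostic: it decodes each line into a generic map rather than assuming event field names):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // e.g. out/minikube-darwin-arm64 start -o=json ... | this-program
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // events can be long lines
        for sc.Scan() {
            var ev map[string]interface{}
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON noise in the stream
            }
            fmt.Printf("event: %v\n", ev)
        }
    }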

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-016000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-016000: exit status 85 (81.017709ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-006000 | jenkins | v1.32.0 | 11 Mar 24 13:09 PDT |                     |
	|         | -p download-only-006000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| delete  | -p download-only-006000           | download-only-006000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| start   | -o=json --download-only           | download-only-707000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT |                     |
	|         | -p download-only-707000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| delete  | -p download-only-707000           | download-only-707000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT | 11 Mar 24 13:10 PDT |
	| start   | -o=json --download-only           | download-only-016000 | jenkins | v1.32.0 | 11 Mar 24 13:10 PDT |                     |
	|         | -p download-only-016000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 13:10:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 13:10:44.694256    1740 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:10:44.694365    1740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:10:44.694368    1740 out.go:304] Setting ErrFile to fd 2...
	I0311 13:10:44.694370    1740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:10:44.694489    1740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:10:44.695539    1740 out.go:298] Setting JSON to true
	I0311 13:10:44.711445    1740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":615,"bootTime":1710187229,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:10:44.711499    1740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:10:44.714949    1740 out.go:97] [download-only-016000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:10:44.718961    1740 out.go:169] MINIKUBE_LOCATION=18358
	I0311 13:10:44.715023    1740 notify.go:220] Checking for updates...
	I0311 13:10:44.726909    1740 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:10:44.729947    1740 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:10:44.733009    1740 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:10:44.735932    1740 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	W0311 13:10:44.741955    1740 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 13:10:44.742107    1740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:10:44.744951    1740 out.go:97] Using the qemu2 driver based on user configuration
	I0311 13:10:44.744963    1740 start.go:297] selected driver: qemu2
	I0311 13:10:44.744967    1740 start.go:901] validating driver "qemu2" against <nil>
	I0311 13:10:44.745014    1740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 13:10:44.747915    1740 out.go:169] Automatically selected the socket_vmnet network
	I0311 13:10:44.753072    1740 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0311 13:10:44.753156    1740 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 13:10:44.753179    1740 cni.go:84] Creating CNI manager for ""
	I0311 13:10:44.753187    1740 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0311 13:10:44.753195    1740 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0311 13:10:44.753227    1740 start.go:340] cluster config:
	{Name:download-only-016000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-016000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:10:44.757536    1740 iso.go:125] acquiring lock: {Name:mkab3e5627f79807687d99b310b85b18adb65b6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 13:10:44.761921    1740 out.go:97] Starting "download-only-016000" primary control-plane node in "download-only-016000" cluster
	I0311 13:10:44.761932    1740 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 13:10:45.429278    1740 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0311 13:10:45.429373    1740 cache.go:56] Caching tarball of preloaded images
	I0311 13:10:45.430116    1740 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0311 13:10:45.434820    1740 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0311 13:10:45.434858    1740 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0311 13:10:46.019817    1740 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18358-1220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-016000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-016000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-016000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.42s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-961000 --alsologtostderr --binary-mirror http://127.0.0.1:49328 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-961000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-961000
--- PASS: TestBinaryMirror (0.42s)
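
TestBinaryMirror points `minikube start --download-only` at a mirror on http://127.0.0.1:49328 via --binary-mirror. Functionally such a mirror is just an HTTP server exposing the expected binaries; a minimal sketch (the ./mirror directory layout is an assumption for illustration, not the test's actual fixture):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local directory of cached Kubernetes binaries over HTTP.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:49328", nil))
    }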

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-212000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-212000: exit status 85 (59.410209ms)

-- stdout --
	* Profile "addons-212000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-212000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-212000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-212000: exit status 85 (55.539ms)

-- stdout --
	* Profile "addons-212000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-212000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
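
Both PreSetup subtests assert that addon commands against a profile that does not exist fail with exit status 85 rather than succeeding silently. A minimal sketch of recovering that exit code from a child process in Go (binary path and arguments mirror the log above but are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "addons", "enable", "dashboard", "-p", "addons-212000")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Println("exit code:", ee.ExitCode()) // expected here: 85
        } else if err != nil {
            fmt.Println("failed to run:", err)
        }
    }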

TestAddons/Setup (203.73s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-212000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-212000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m23.734161459s)
--- PASS: TestAddons/Setup (203.73s)

TestAddons/parallel/Registry (18.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 8.45925ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-dwfq8" [25875c8f-9c9e-472a-8003-09b286771012] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00436725s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-j7scl" [39ea9ca9-f9f5-46f0-a2c9-6b2d012f26db] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004450208s
addons_test.go:340: (dbg) Run:  kubectl --context addons-212000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-212000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-212000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.005749125s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 ip
2024/03/11 13:14:47 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.46s)
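
The Registry test health-checks the addon twice: in-cluster via `wget --spider` against the service DNS name, and from the host against the node IP on port 5000 (the DEBUG GET above). A host-side sketch of that second probe; the IP is the one this run reported and is illustrative:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.105.2:5000")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }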

TestAddons/parallel/InspektorGadget (10.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-87h2x" [434f6908-79f2-4e84-a85f-eb109e898357] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004841334s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-212000
addons_test.go:841: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-212000: (5.241358208s)
--- PASS: TestAddons/parallel/InspektorGadget (10.25s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.816917ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-jrn44" [88601ae3-d859-4f45-a68e-2b495200f659] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004055209s
addons_test.go:415: (dbg) Run:  kubectl --context addons-212000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (51.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 8.81375ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-212000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-212000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c83d6dc4-1c7b-4450-b48c-35ac9430fbd5] Pending
helpers_test.go:344: "task-pv-pod" [c83d6dc4-1c7b-4450-b48c-35ac9430fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c83d6dc4-1c7b-4450-b48c-35ac9430fbd5] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.00403475s
addons_test.go:584: (dbg) Run:  kubectl --context addons-212000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-212000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-212000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-212000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-212000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-212000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-212000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a298f018-954f-4a35-bca1-9505b9f42178] Pending
helpers_test.go:344: "task-pv-pod-restore" [a298f018-954f-4a35-bca1-9505b9f42178] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a298f018-954f-4a35-bca1-9505b9f42178] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004148125s
addons_test.go:626: (dbg) Run:  kubectl --context addons-212000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-212000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-212000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-arm64 -p addons-212000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.094999208s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.77s)
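
The CSI test repeatedly shells out to `kubectl get pvc ... -o jsonpath={.status.phase}` until each claim binds. The same wait, sketched with client-go instead of kubectl (kubeconfig discovery, namespace, and claim name mirror the test but are assumptions here; the real helper adds timeouts and backoff):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
                Get(context.TODO(), "hpvc", metav1.GetOptions{})
            if err == nil && pvc.Status.Phase == corev1.ClaimBound {
                fmt.Println("pvc bound")
                return
            }
            time.Sleep(2 * time.Second) // crude poll; the test helper retries the same way
        }
    }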

TestAddons/parallel/Headlamp (12.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-212000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-q785p" [7d7070ef-6f6d-42fe-a5ca-e419a296b55e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-q785p" [7d7070ef-6f6d-42fe-a5ca-e419a296b55e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.0027735s
--- PASS: TestAddons/parallel/Headlamp (12.52s)

TestAddons/parallel/CloudSpanner (5.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-84hrc" [bb6e17a5-604b-46f0-8dda-80130298422e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003898125s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-212000
--- PASS: TestAddons/parallel/CloudSpanner (5.17s)

TestAddons/parallel/LocalPath (54.81s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-212000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-212000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a05de413-867f-426f-a796-07e67403cf09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a05de413-867f-426f-a796-07e67403cf09] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a05de413-867f-426f-a796-07e67403cf09] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00414575s
addons_test.go:891: (dbg) Run:  kubectl --context addons-212000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 ssh "cat /opt/local-path-provisioner/pvc-96ca7c81-3edb-43c3-9d40-7db83042191a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-212000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-212000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-arm64 -p addons-212000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-arm64 -p addons-212000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.325217083s)
--- PASS: TestAddons/parallel/LocalPath (54.81s)

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jr55d" [4cf28a72-abc4-4e97-b74c-a9745af1ee62] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005993916s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-212000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-k84vk" [ef7ef197-5300-4f06-851b-9c74c2c808f7] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0041555s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-212000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-212000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-212000
addons_test.go:172: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-212000: (12.203227916s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-212000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-212000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-212000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (9.26s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.26s)

TestErrorSpam/setup (31.46s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-170000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-170000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 --driver=qemu2 : (31.4581085s)
--- PASS: TestErrorSpam/setup (31.46s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (64.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 stop: (12.203460875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 stop: (26.030405458s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-170000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-170000 stop: (26.034831542s)
--- PASS: TestErrorSpam/stop (64.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18358-1220/.minikube/files/etc/test/nested/copy/1652/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-503000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0311 13:19:29.094275    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.101028    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.113103    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.135165    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.177223    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.259276    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.421353    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:29.743424    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:30.385517    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:31.667600    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:34.229668    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:19:39.351679    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-503000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m17.310697041s)
--- PASS: TestFunctional/serial/StartWithProxy (77.31s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.1s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-503000 --alsologtostderr -v=8
E0311 13:19:49.593575    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:20:10.073901    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-503000 --alsologtostderr -v=8: (35.094961584s)
functional_test.go:659: soft start took 35.095329417s for "functional-503000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.10s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-503000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 cache add registry.k8s.io/pause:3.1: (3.9397225s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 cache add registry.k8s.io/pause:3.3: (3.275172917s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 cache add registry.k8s.io/pause:latest: (2.299482667s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local875704754/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cache add minikube-local-cache-test:functional-503000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cache delete minikube-local-cache-test:functional-503000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-503000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.842917ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 cache reload: (1.899609458s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)
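Note: the Non-zero exit above is the expected middle step; the test deletes the image inside the node, confirms crictl no longer sees it, then verifies that cache reload restores it from the host-side cache:

    minikube -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p <profile> cache reload
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest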

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 kubectl -- --context functional-503000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-503000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (32.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-503000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0311 13:20:51.034987    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-503000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.648346167s)
functional_test.go:757: restart took 32.648457959s for "functional-503000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.65s)
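Note: --extra-config forwards per-component flags at start time; here it enables the NamespaceAutoProvision admission plugin on the apiserver, and --wait=all blocks until every component reports healthy. The stray E0311 cert_rotation line references a profile from an earlier test (addons-212000) and appears unrelated to this restart:

    minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all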

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-503000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2558828614/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-503000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-503000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-503000: exit status 115 (109.690875ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30874 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-503000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)
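Note: exit status 115 is the SVC_UNREACHABLE case shown in the stderr block; the Service object exists (the URL table still prints) but no running pod backs it, so the command refuses to open the URL. Reproduction sketch (manifest name illustrative):

    kubectl --context <profile> apply -f invalidsvc.yaml
    minikube service invalid-svc -p <profile>    # exits 115: no running pod for service
    kubectl --context <profile> delete -f invalidsvc.yaml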

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 config get cpus: exit status 14 (34.816458ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 config get cpus: exit status 14 (36.163167ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
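Note: per the two Non-zero exits above, config get returns exit status 14 when a key is unset, which makes the set/get/unset round trip scriptable:

    minikube -p <profile> config set cpus 2
    minikube -p <profile> config get cpus      # prints 2
    minikube -p <profile> config unset cpus
    minikube -p <profile> config get cpus      # exits 14: key not found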

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.73s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-503000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-503000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2646: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.73s)
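Note: dashboard --url starts the proxy and prints its address instead of opening a browser; the "unable to kill pid" helper message only means the daemon had already exited by the time the harness cleaned it up. Manual equivalent:

    minikube dashboard --url --port 36195 -p <profile>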

                                                
                                    
TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-503000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-503000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (125.99925ms)

-- stdout --
	* [functional-503000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0311 13:22:09.364176    2629 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:22:09.364298    2629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:22:09.364301    2629 out.go:304] Setting ErrFile to fd 2...
	I0311 13:22:09.364303    2629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:22:09.364421    2629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:22:09.365499    2629 out.go:298] Setting JSON to false
	I0311 13:22:09.382722    2629 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1300,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:22:09.382782    2629 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:22:09.388629    2629 out.go:177] * [functional-503000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0311 13:22:09.396575    2629 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:22:09.396658    2629 notify.go:220] Checking for updates...
	I0311 13:22:09.399589    2629 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:22:09.403476    2629 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:22:09.406559    2629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:22:09.413543    2629 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:22:09.420532    2629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:22:09.423802    2629 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:22:09.424058    2629 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:22:09.428516    2629 out.go:177] * Using the qemu2 driver based on existing profile
	I0311 13:22:09.434549    2629 start.go:297] selected driver: qemu2
	I0311 13:22:09.434557    2629 start.go:901] validating driver "qemu2" against &{Name:functional-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:22:09.434618    2629 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:22:09.441586    2629 out.go:177] 
	W0311 13:22:09.445562    2629 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0311 13:22:09.449547    2629 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-503000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
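Note: exit status 23 here is the dry-run validation failing on RSRC_INSUFFICIENT_REQ_MEMORY (250MiB requested vs. the 1800MB usable minimum reported in stderr); the second, flag-free dry run validates cleanly. A dry run checks a configuration without touching the VM:

    minikube start -p <profile> --dry-run --memory 250MB --driver=qemu2    # exits 23
    minikube start -p <profile> --dry-run --driver=qemu2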

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-503000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-503000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.7815ms)

-- stdout --
	* [functional-503000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0311 13:22:09.598000    2640 out.go:291] Setting OutFile to fd 1 ...
	I0311 13:22:09.598110    2640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:22:09.598115    2640 out.go:304] Setting ErrFile to fd 2...
	I0311 13:22:09.598117    2640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 13:22:09.598241    2640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
	I0311 13:22:09.599678    2640 out.go:298] Setting JSON to false
	I0311 13:22:09.617001    2640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1300,"bootTime":1710187229,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0311 13:22:09.617105    2640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0311 13:22:09.621578    2640 out.go:177] * [functional-503000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0311 13:22:09.628548    2640 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 13:22:09.632551    2640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	I0311 13:22:09.628579    2640 notify.go:220] Checking for updates...
	I0311 13:22:09.638572    2640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0311 13:22:09.641614    2640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 13:22:09.643005    2640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	I0311 13:22:09.646541    2640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 13:22:09.649886    2640 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0311 13:22:09.650122    2640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 13:22:09.654417    2640 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0311 13:22:09.661573    2640 start.go:297] selected driver: qemu2
	I0311 13:22:09.661579    2640 start.go:901] validating driver "qemu2" against &{Name:functional-503000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-503000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 13:22:09.661627    2640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 13:22:09.667489    2640 out.go:177] 
	W0311 13:22:09.675549    2640 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0311 13:22:09.679560    2640 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
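Note: same failing dry run as above, but with minikube's output localized to French; the harness presumably switches the process locale, along the lines of (locale value is an assumption, not shown in the log):

    LANG=fr_FR.UTF-8 minikube start -p <profile> --dry-run --memory 250MB --driver=qemu2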

                                                
                                    
TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
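Note: status supports Go templates and JSON output for scripting; the harness's own format string spells one key "kublet", but the underlying template field is .Kubelet:

    minikube -p <profile> status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p <profile> status -o json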

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (24.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [88d5e356-c36d-43b8-a9af-a3cb7707c322] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004074208s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-503000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-503000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-503000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-503000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [90a38da9-5cb4-487f-877c-a75e9a6579df] Pending
helpers_test.go:344: "sp-pod" [90a38da9-5cb4-487f-877c-a75e9a6579df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [90a38da9-5cb4-487f-877c-a75e9a6579df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004403041s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-503000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-503000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-503000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02d7b708-dd18-40f3-be2c-cf4133603707] Pending
helpers_test.go:344: "sp-pod" [02d7b708-dd18-40f3-be2c-cf4133603707] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02d7b708-dd18-40f3-be2c-cf4133603707] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004010833s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-503000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.16s)
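Note: the sequence above shows why the second sp-pod comes up with the file intact: /tmp/mount is backed by the claim, not the container, so data survives pod deletion. Condensed:

    kubectl --context <profile> apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo
    kubectl --context <profile> delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context <profile> exec sp-pod -- ls /tmp/mount    # foo is still there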

                                                
                                    
TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh -n functional-503000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cp functional-503000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd414665559/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh -n functional-503000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh -n functional-503000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.41s)
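Note: the three cp/ssh-cat pairs verify host-to-node copy, node-to-host copy, and copy into a directory that does not yet exist on the node:

    minikube -p <profile> cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p <profile> cp <profile>:/home/docker/cp-test.txt ./cp-test.txt
    minikube -p <profile> cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt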

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1652/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /etc/test/nested/copy/1652/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
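Note: file sync copies anything under $MINIKUBE_HOME/files into the node's filesystem at the matching path on the next start, which is how the test's /etc/test/nested/copy/1652/hosts appears inside the VM. Sketch (paths illustrative):

    mkdir -p ~/.minikube/files/etc
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/synced-file
    minikube start -p <profile>
    minikube -p <profile> ssh "cat /etc/synced-file"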

                                                
                                    
TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1652.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /etc/ssl/certs/1652.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1652.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /usr/share/ca-certificates/1652.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /etc/ssl/certs/16522.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /usr/share/ca-certificates/16522.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
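Note: cert sync is the certificate counterpart; a CA certificate dropped into $MINIKUBE_HOME/certs is installed under /etc/ssl/certs and /usr/share/ca-certificates in the node, along with an OpenSSL hash alias (the 51391683.0-style names checked above). Sketch (file name illustrative):

    cp my-ca.pem ~/.minikube/certs/
    minikube start -p <profile>
    minikube -p <profile> ssh "sudo cat /etc/ssl/certs/my-ca.pem"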

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-503000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh "sudo systemctl is-active crio": exit status 1 (60.806958ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
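Note: with docker as the active runtime, crio must be inactive; systemctl is-active prints the unit state and exits 3 for inactive units, so the Non-zero exit above is the passing condition:

    minikube -p <profile> ssh "sudo systemctl is-active crio"    # prints "inactive", exits 3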

                                                
                                    
TestFunctional/parallel/License (1.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.3453435s)
--- PASS: TestFunctional/parallel/License (1.35s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-503000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-503000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-503000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-503000 image ls --format short --alsologtostderr:
I0311 13:22:12.230751    2668 out.go:291] Setting OutFile to fd 1 ...
I0311 13:22:12.231177    2668 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.231182    2668 out.go:304] Setting ErrFile to fd 2...
I0311 13:22:12.231184    2668 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.231325    2668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:22:12.231749    2668 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.231808    2668 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.232776    2668 ssh_runner.go:195] Run: systemctl --version
I0311 13:22:12.232785    2668 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/functional-503000/id_rsa Username:docker}
I0311 13:22:12.259148    2668 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
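Note: image ls accepts --format short|table|json|yaml (the next three tests cover the other formats); per the stderr traces, every variant is backed by the same docker images --no-trunc call over SSH:

    minikube -p <profile> image ls --format table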

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-503000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-503000 | dd014a0282e23 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/google-containers/addon-resizer      | functional-503000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 760b7cbba31e1 | 192MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-503000 image ls --format table --alsologtostderr:
I0311 13:22:12.470967    2674 out.go:291] Setting OutFile to fd 1 ...
I0311 13:22:12.471119    2674 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.471123    2674 out.go:304] Setting ErrFile to fd 2...
I0311 13:22:12.471126    2674 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.471249    2674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:22:12.471682    2674 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.471738    2674 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.472674    2674 ssh_runner.go:195] Run: systemctl --version
I0311 13:22:12.472688    2674 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/functional-503000/id_rsa Username:docker}
I0311 13:22:12.502210    2674 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-503000 image ls --format json --alsologtostderr:
[{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-503000"],"size":"32900000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"829e9de338bd5fdd3f1
6f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["r
egistry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"dd014a0282e23253ddcd057a5fd4dda467bf71a9f678a9371d521036dfca5596","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-503000"],"size":"30"},{"id":"760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-503000 image ls --format json --alsologtostderr:
I0311 13:22:12.386676    2672 out.go:291] Setting OutFile to fd 1 ...
I0311 13:22:12.386830    2672 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.386834    2672 out.go:304] Setting ErrFile to fd 2...
I0311 13:22:12.386836    2672 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.386959    2672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:22:12.387385    2672 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.387440    2672 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.388332    2672 ssh_runner.go:195] Run: systemctl --version
I0311 13:22:12.388341    2672 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/functional-503000/id_rsa Username:docker}
I0311 13:22:12.418067    2672 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-503000 image ls --format yaml --alsologtostderr:
- id: dd014a0282e23253ddcd057a5fd4dda467bf71a9f678a9371d521036dfca5596
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-503000
size: "30"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-503000
size: "32900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-503000 image ls --format yaml --alsologtostderr:
I0311 13:22:12.310292    2670 out.go:291] Setting OutFile to fd 1 ...
I0311 13:22:12.310426    2670 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.310430    2670 out.go:304] Setting ErrFile to fd 2...
I0311 13:22:12.310433    2670 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.310566    2670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:22:12.310998    2670 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.311056    2670 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.311992    2670 ssh_runner.go:195] Run: systemctl --version
I0311 13:22:12.312002    2670 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/functional-503000/id_rsa Username:docker}
I0311 13:22:12.337819    2670 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)
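
Note: the JSON and YAML listings come from the same subcommand with a different --format value; the YAML invocation is logged above, and the JSON variant presumably uses --format json. To reproduce either by hand against this profile with the built binary:

	out/minikube-darwin-arm64 -p functional-503000 image ls --format json
	out/minikube-darwin-arm64 -p functional-503000 image ls --format yaml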

TestFunctional/parallel/ImageCommands/ImageBuild (6.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh pgrep buildkitd: exit status 1 (62.936416ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image build -t localhost/my-image:functional-503000 testdata/build --alsologtostderr
E0311 13:22:12.954918    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
2024/03/11 13:22:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 image build -t localhost/my-image:functional-503000 testdata/build --alsologtostderr: (5.976605209s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-503000 image build -t localhost/my-image:functional-503000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in cecdb94c0976
Removing intermediate container cecdb94c0976
---> a81158451f3d
Step 3/3 : ADD content.txt /
---> ceb6d346f361
Successfully built ceb6d346f361
Successfully tagged localhost/my-image:functional-503000
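
Note: the three logged steps imply that testdata/build contains a Dockerfile of roughly this shape (a reconstruction from the build output above, not the verbatim file):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /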
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-503000 image build -t localhost/my-image:functional-503000 testdata/build --alsologtostderr:
I0311 13:22:12.616661    2678 out.go:291] Setting OutFile to fd 1 ...
I0311 13:22:12.616863    2678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.616866    2678 out.go:304] Setting ErrFile to fd 2...
I0311 13:22:12.616869    2678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 13:22:12.616993    2678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18358-1220/.minikube/bin
I0311 13:22:12.617405    2678 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.618184    2678 config.go:182] Loaded profile config "functional-503000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0311 13:22:12.619115    2678 ssh_runner.go:195] Run: systemctl --version
I0311 13:22:12.619125    2678 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18358-1220/.minikube/machines/functional-503000/id_rsa Username:docker}
I0311 13:22:12.643755    2678 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3568815888.tar
I0311 13:22:12.643846    2678 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0311 13:22:12.649703    2678 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3568815888.tar
I0311 13:22:12.651501    2678 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3568815888.tar: stat -c "%s %y" /var/lib/minikube/build/build.3568815888.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3568815888.tar': No such file or directory
I0311 13:22:12.651517    2678 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3568815888.tar --> /var/lib/minikube/build/build.3568815888.tar (3072 bytes)
I0311 13:22:12.662850    2678 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3568815888
I0311 13:22:12.667835    2678 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3568815888 -xf /var/lib/minikube/build/build.3568815888.tar
I0311 13:22:12.675576    2678 docker.go:360] Building image: /var/lib/minikube/build/build.3568815888
I0311 13:22:12.675626    2678 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-503000 /var/lib/minikube/build/build.3568815888
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0311 13:22:18.547265    2678 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-503000 /var/lib/minikube/build/build.3568815888: (5.871784542s)
I0311 13:22:18.547340    2678 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3568815888
I0311 13:22:18.550817    2678 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3568815888.tar
I0311 13:22:18.554057    2678 build_images.go:217] Built localhost/my-image:functional-503000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3568815888.tar
I0311 13:22:18.554072    2678 build_images.go:133] succeeded building to: functional-503000
I0311 13:22:18.554077    2678 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.11s)

TestFunctional/parallel/ImageCommands/Setup (5.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.509759417s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-503000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.55s)

TestFunctional/parallel/DockerEnv/bash (0.38s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-503000 docker-env) && out/minikube-darwin-arm64 status -p functional-503000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-503000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.38s)
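
Note: the DockerEnv test drives the same workflow a user would: point the local docker CLI at the VM's daemon, then list its images. A minimal sketch assuming a minikube binary on PATH (the test uses out/minikube-darwin-arm64 instead):

	eval $(minikube -p functional-503000 docker-env)
	docker images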

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-503000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-503000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-rwnn9" [9b980977-07dd-4758-8d29-b15632e9f334] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-rwnn9" [9b980977-07dd-4758-8d29-b15632e9f334] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.003921292s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image load --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 image load --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr: (2.187886583s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image load --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 image load --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr: (1.545619625s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.50s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.447428792s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-503000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image load --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 image load --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr: (1.9436575s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.50s)

TestFunctional/parallel/ServiceCmd/List (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 service list -o json
functional_test.go:1490: Took "107.776458ms" to run "out/minikube-darwin-arm64 -p functional-503000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.11s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:32272
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.10s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.10s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:32272
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
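
Note: the HTTPS and URL subtests resolve to the same endpoint, https://192.168.105.4:32272 and http://192.168.105.4:32272, i.e. the node IP plus the NodePort assigned to the hello-node service. One way to read that port directly with plain kubectl (not part of the test):

	kubectl --context functional-503000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'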

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-503000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-503000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-503000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-503000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2468: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image save gcr.io/google-containers/addon-resizer:functional-503000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-503000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-503000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ddcad1a8-e2a9-4b73-9b4b-46838e1a84e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ddcad1a8-e2a9-4b73-9b4b-46838e1a84e1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003865959s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image rm gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-503000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 image save --daemon gcr.io/google-containers/addon-resizer:functional-503000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-503000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-503000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.223.246 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
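
Note: 10.110.223.246 is the LoadBalancer ingress IP recorded by the previous subtest; it is reachable from the macOS host only while the tunnel from StartTunnel is running. A manual spot check would look like (hypothetical invocation, same IP as the log):

	curl -sI http://10.110.223.246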

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
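
Note: the dig query targets the cluster DNS service IP 10.96.0.10 directly from the host, which works because minikube tunnel routes the service CIDR to the VM. Reproducible with the exact command from the log:

	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A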

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-503000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "108.847291ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "37.759292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "109.939333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "38.687083ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (10.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2889974724/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710188515806261000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2889974724/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710188515806261000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2889974724/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710188515806261000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2889974724/001/test-1710188515806261000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Done: out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T /mount-9p | grep 9p": (1.511933333s)
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 11 20:21 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 11 20:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 11 20:21 test-1710188515806261000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh cat /mount-9p/test-1710188515806261000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-503000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d] Pending
helpers_test.go:344: "busybox-mount" [ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ae1baf6e-b708-4c6e-91f5-ee8cd9fa6b6d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00394175s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-503000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2889974724/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.08s)
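
Note: the any-port test mounts a host temp directory into the guest over 9p and then verifies the mount with findmnt. The essential pair of commands, with <local-dir> standing in for the generated temp path:

	out/minikube-darwin-arm64 mount -p functional-503000 <local-dir>:/mount-9p --alsologtostderr -v=1
	out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T /mount-9p | grep 9p"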

TestFunctional/parallel/MountCmd/specific-port (1.06s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3691847349/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.898083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3691847349/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh "sudo umount -f /mount-9p": exit status 1 (61.528333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-503000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3691847349/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.06s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.39s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1: exit status 1 (70.511625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1: exit status 1 (59.043541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1: exit status 1 (61.456709ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-503000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-503000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-503000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup287654964/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.39s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-503000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-503000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-503000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestMutliControlPlane/serial/StartCluster (251.63s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-674000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0311 13:24:29.085960    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:24:56.792537    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/addons-212000/client.crt: no such file or directory
E0311 13:26:14.209448    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.215810    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.227889    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.249953    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.292082    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.373765    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.534605    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:14.856707    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:15.498168    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:16.779549    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:19.341636    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:26:24.463672    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-674000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (4m11.444061084s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (251.63s)
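
Note: the --ha flag asks minikube for a multi-control-plane cluster in a single profile, so the 4m11s wall time covers provisioning several qemu2 machines. The start invocation, verbatim from the log:

	out/minikube-darwin-arm64 start -p ha-674000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2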

TestMutliControlPlane/serial/DeployApp (9.23s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- rollout status deployment/busybox
E0311 13:26:34.705584    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-674000 -- rollout status deployment/busybox: (7.647746625s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-85gkm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-nw7bq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-s8r7f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-85gkm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-nw7bq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-s8r7f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-85gkm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-nw7bq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-s8r7f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (9.23s)

TestMutliControlPlane/serial/PingHostFromPods (0.81s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-85gkm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-85gkm -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-nw7bq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-nw7bq -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-s8r7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-674000 -- exec busybox-5b5d89c9d6-s8r7f -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (0.81s)

TestMutliControlPlane/serial/AddWorkerNode (76.63s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-674000 -v=7 --alsologtostderr
E0311 13:26:55.185983    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
E0311 13:27:36.146679    1652 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18358-1220/.minikube/profiles/functional-503000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-674000 -v=7 --alsologtostderr: (1m16.392350459s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (76.63s)

TestMutliControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-674000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (2.36s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.358558541s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (2.36s)

TestMutliControlPlane/serial/CopyFile (4.64s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp testdata/cp-test.txt ha-674000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile16860689/001/cp-test_ha-674000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000:/home/docker/cp-test.txt ha-674000-m02:/home/docker/cp-test_ha-674000_ha-674000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test_ha-674000_ha-674000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000:/home/docker/cp-test.txt ha-674000-m03:/home/docker/cp-test_ha-674000_ha-674000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test_ha-674000_ha-674000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000:/home/docker/cp-test.txt ha-674000-m04:/home/docker/cp-test_ha-674000_ha-674000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test_ha-674000_ha-674000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp testdata/cp-test.txt ha-674000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile16860689/001/cp-test_ha-674000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m02:/home/docker/cp-test.txt ha-674000:/home/docker/cp-test_ha-674000-m02_ha-674000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test_ha-674000-m02_ha-674000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m02:/home/docker/cp-test.txt ha-674000-m03:/home/docker/cp-test_ha-674000-m02_ha-674000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test_ha-674000-m02_ha-674000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m02:/home/docker/cp-test.txt ha-674000-m04:/home/docker/cp-test_ha-674000-m02_ha-674000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test_ha-674000-m02_ha-674000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp testdata/cp-test.txt ha-674000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile16860689/001/cp-test_ha-674000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m03:/home/docker/cp-test.txt ha-674000:/home/docker/cp-test_ha-674000-m03_ha-674000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test_ha-674000-m03_ha-674000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m03:/home/docker/cp-test.txt ha-674000-m02:/home/docker/cp-test_ha-674000-m03_ha-674000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test_ha-674000-m03_ha-674000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m03:/home/docker/cp-test.txt ha-674000-m04:/home/docker/cp-test_ha-674000-m03_ha-674000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test_ha-674000-m03_ha-674000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp testdata/cp-test.txt ha-674000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMutliControlPlaneserialCopyFile16860689/001/cp-test_ha-674000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m04:/home/docker/cp-test.txt ha-674000:/home/docker/cp-test_ha-674000-m04_ha-674000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000 "sudo cat /home/docker/cp-test_ha-674000-m04_ha-674000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m04:/home/docker/cp-test.txt ha-674000-m02:/home/docker/cp-test_ha-674000-m04_ha-674000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m02 "sudo cat /home/docker/cp-test_ha-674000-m04_ha-674000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 cp ha-674000-m04:/home/docker/cp-test.txt ha-674000-m03:/home/docker/cp-test_ha-674000-m04_ha-674000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-674000 ssh -n ha-674000-m03 "sudo cat /home/docker/cp-test_ha-674000-m04_ha-674000-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (4.64s)
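The CopyFile steps above repeat one fixed round trip per node pair: push a seed file with `minikube cp`, read it back over `minikube ssh`, and compare against the local copy. A minimal sketch of that check in Go under the report's binary and profile names (runMinikube is an illustrative helper, not the suite's actual one):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runMinikube shells out to the binary under test and returns combined output.
func runMinikube(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "ha-674000"
	// Push the seed file onto node m02, then cat it back over ssh.
	if _, err := runMinikube("-p", profile, "cp", "testdata/cp-test.txt", profile+"-m02:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := runMinikube("-p", profile, "ssh", "-n", profile+"-m02", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// The round trip passes only if the remote contents match the local seed file.
	if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
		fmt.Println("cp round trip mismatch")
	}
}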
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.62s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m20.618070917s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.62s)
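The 1m20s above is a single `profile list --output json` call. A minimal sketch of consuming that output, assuming only that the top level is a JSON object (no key names are assumed):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test drives and check the output parses as JSON.
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var v map[string]json.RawMessage
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Println("top-level keys:", len(v))
}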
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.44s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-825000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-825000 --output=json --user=testUser: (3.439189125s)
--- PASS: TestJSONOutput/stop/Command (3.44s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-098000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-098000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.905ms)
-- stdout --
	{"specversion":"1.0","id":"9599abb6-88db-48b4-9a17-df0c3fbcd1a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-098000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b51d0dd0-ff90-4397-8b7a-4f9f59285e98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18358"}}
	{"specversion":"1.0","id":"68abd55d-b831-4707-be0e-10eabfdd3d3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig"}}
	{"specversion":"1.0","id":"87a4417e-10f5-47a3-b9f2-58a58da1ebcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"e507c466-ff97-4ac3-a848-c530a92fa38b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dfa93997-d080-46cc-bc01-a113ed25264f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube"}}
	{"specversion":"1.0","id":"0e8c8b46-193c-4b54-a064-f2c5f3b5744a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af274bdb-3d8d-4a4f-8f02-deb458df3c6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-098000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-098000
--- PASS: TestErrorJSONOutput (0.33s)
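Each stdout line above is a CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the exit code. A minimal sketch of decoding one of those lines, with the struct fields read off the events shown above:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the fields visible in the JSON lines above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the stdout block above, abbreviated to its data fields.
	line := `{"specversion":"1.0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var e cloudEvent
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}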
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.96s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-517000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-371000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.877542ms)
-- stdout --
	* [NoKubernetes-371000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18358-1220/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
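The exit status 14 above is minikube rejecting a contradictory flag pair. A minimal sketch of that kind of mutual-exclusion check with Go's standard flag package (illustrative only, not minikube's actual validation code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start the node without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Reject the combination the test exercises; 14 matches the observed exit status.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
}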
TestNoKubernetes/serial/VerifyK8sNotRunning (0.06s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-371000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-371000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (59.152417ms)
-- stdout --
	* The control-plane node NoKubernetes-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-371000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.06s)
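Exit status 83 is how the binary reports a stopped host, and the helper accepts it as evidence that kubelet is not active. A minimal sketch of pulling that code out of os/exec, which is how such helpers branch on outcomes:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-371000",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 83 here means the host is stopped; any non-zero code means kubelet is not active.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}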
TestNoKubernetes/serial/ProfileList (0.18s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.18s)

TestNoKubernetes/serial/Stop (3.22s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-371000
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18358
- KUBECONFIG=/Users/jenkins/minikube-integration/18358-1220/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3345367998/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-371000: (3.216970042s)
--- PASS: TestNoKubernetes/serial/Stop (3.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-371000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-371000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.730875ms)
-- stdout --
	* The control-plane node NoKubernetes-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-371000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-930000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-930000 --alsologtostderr -v=3: (3.189643292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.19s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-930000 -n old-k8s-version-930000: exit status 7 (59.662041ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-930000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
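--format={{.Host}} in the status call above is a Go text/template rendered against the cluster's status, which is why the stdout block is the bare word Stopped. A minimal illustration of the mechanism (the Status struct here is a stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct the --format template is executed against.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// A stopped cluster renders as "Stopped", matching the stdout above.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
}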
TestStartStop/group/no-preload/serial/Stop (2.14s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-360000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-360000 --alsologtostderr -v=3: (2.144737709s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-360000 -n no-preload-360000: exit status 7 (58.830417ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-360000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-026000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-026000 --alsologtostderr -v=3: (2.109873875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-026000 -n embed-certs-026000: exit status 7 (58.74275ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-026000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-406000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-406000 --alsologtostderr -v=3: (3.611483667s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-406000 -n default-k8s-diff-port-406000: exit status 7 (55.440667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-406000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-440000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.71s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-440000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-440000 --alsologtostderr -v=3: (3.711159s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.71s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-440000 -n newest-cni-440000: exit status 7 (58.292041ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-440000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/281)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.48s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-425000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-425000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-425000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-425000

>>> host: docker daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: docker daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: docker system info:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: cri-docker daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: cri-docker daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: cri-dockerd version:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: containerd daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: containerd daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: containerd config dump:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crio daemon status:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crio daemon config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: /etc/crio:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

>>> host: crio config:
* Profile "cilium-425000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-425000"

----------------------- debugLogs end: cilium-425000 [took: 2.252142416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-425000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-425000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-873000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-873000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
