Test Report: QEMU_macOS 19307

5a24b9ce483ba531c92412d298617e78cc9898c8:2024-07-19:35418

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.12
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.91
55 TestCertOptions 10.21
56 TestCertExpiration 195.28
57 TestDockerFlags 10.19
58 TestForceSystemdFlag 10.04
59 TestForceSystemdEnv 10.73
104 TestFunctional/parallel/ServiceCmdConnect 31.88
176 TestMultiControlPlane/serial/StopSecondaryNode 214.11
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.89
178 TestMultiControlPlane/serial/RestartSecondaryNode 209.02
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 283.49
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.02
183 TestMultiControlPlane/serial/StopCluster 251.16
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 9.93
193 TestJSONOutput/start/Command 9.88
199 TestJSONOutput/pause/Command 0.07
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.14
225 TestMountStart/serial/StartWithMountFirst 10.05
228 TestMultiNode/serial/FreshStart2Nodes 9.82
229 TestMultiNode/serial/DeployApp2Nodes 93.69
230 TestMultiNode/serial/PingHostFrom2Pods 0.08
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 59.7
237 TestMultiNode/serial/RestartKeepsNodes 9.13
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.34
240 TestMultiNode/serial/RestartMultiNode 5.26
241 TestMultiNode/serial/ValidateNameConflict 19.87
245 TestPreload 10.11
247 TestScheduledStopUnix 9.94
248 TestSkaffold 12.16
251 TestRunningBinaryUpgrade 690.04
253 TestKubernetesUpgrade 18.22
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.74
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.29
269 TestStoppedBinaryUpgrade/Upgrade 574.45
271 TestPause/serial/Start 9.9
281 TestNoKubernetes/serial/StartWithK8s 9.83
282 TestNoKubernetes/serial/StartWithStopK8s 5.3
283 TestNoKubernetes/serial/Start 5.26
287 TestNoKubernetes/serial/StartNoArgs 5.29
289 TestNetworkPlugins/group/auto/Start 9.74
290 TestNetworkPlugins/group/calico/Start 9.75
292 TestNetworkPlugins/group/custom-flannel/Start 9.78
293 TestNetworkPlugins/group/false/Start 9.96
294 TestNetworkPlugins/group/kindnet/Start 10.47
295 TestNetworkPlugins/group/flannel/Start 9.98
296 TestNetworkPlugins/group/enable-default-cni/Start 9.87
297 TestNetworkPlugins/group/bridge/Start 10.14
298 TestNetworkPlugins/group/kubenet/Start 10.2
300 TestStartStop/group/old-k8s-version/serial/FirstStart 9.94
302 TestStartStop/group/no-preload/serial/FirstStart 9.92
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/old-k8s-version/serial/SecondStart 7.32
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
313 TestStartStop/group/old-k8s-version/serial/Pause 0.11
316 TestStartStop/group/embed-certs/serial/FirstStart 9.89
318 TestStartStop/group/no-preload/serial/SecondStart 6.52
319 TestStartStop/group/embed-certs/serial/DeployApp 0.1
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
324 TestStartStop/group/no-preload/serial/Pause 0.11
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.87
329 TestStartStop/group/embed-certs/serial/SecondStart 6.5
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
335 TestStartStop/group/embed-certs/serial/Pause 0.11
338 TestStartStop/group/newest-cni/serial/FirstStart 9.88
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.97
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.24
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
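
Nearly every failure below reduces to one of two errors: a 404 while caching kubectl for v1.20.0 on darwin/arm64 (the TestDownloadOnly block and its dependents), or a refused connection to /var/run/socket_vmnet that prevents the qemu2 driver from creating any VM (most of the rest). To rerun a single entry from this table locally, here is a minimal sketch from a minikube checkout, assuming the repo's `make integration` target and the TEST_ARGS convention described in the contributor docs (test name and driver are placeholders):

  # rebuild the binary this report exercises
  make out/minikube-darwin-arm64
  # rerun one failing test against the qemu2 driver
  env TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestOffline" make integration
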
TestDownloadOnly/v1.20.0/json-events (11.12s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-914000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-914000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.121542875s)

-- stdout --
	{"specversion":"1.0","id":"267e06e9-e5f9-41c4-b109-57c2f5acb4b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-914000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90752d49-48f4-4718-a341-dd81795a8fb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19307"}}
	{"specversion":"1.0","id":"ae0e48b4-3001-490d-b3f3-0fba43be5f91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig"}}
	{"specversion":"1.0","id":"5ed184d6-5b96-4925-a58c-9558d50902ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"163f6dae-ce24-47e5-83bd-77de83609222","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6016ab88-8bde-42f3-8921-18a21d431fb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube"}}
	{"specversion":"1.0","id":"51670bdb-54f3-43cd-9893-6e4e37e00cd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"bf01fad3-a67d-45e8-8d5c-e47420dd3329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"da11995d-b741-4678-b97b-ad4067382919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e5964324-f08c-462a-839a-fc7bb15797d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb248781-87d8-4a00-a0e2-4688430ef021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-914000\" primary control-plane node in \"download-only-914000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa84a3c7-23fa-485b-9655-2ffe5afb48b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f43f6d5f-19f2-4eb7-b64f-f45a2a361841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60] Decompressors:map[bz2:0x14000168830 gz:0x14000168838 tar:0x140001687e0 tar.bz2:0x140001687f0 tar.gz:0x14000168800 tar.xz:0x14000168810 tar.zst:0x14000168820 tbz2:0x140001687f0 tgz:0x14
000168800 txz:0x14000168810 tzst:0x14000168820 xz:0x14000168840 zip:0x14000168850 zst:0x14000168848] Getters:map[file:0x1400077e6d0 http:0x140008b6190 https:0x140008b61e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"3b7fe58a-fa0c-4390-b0fd-6d9b7ee47c0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0719 11:12:54.793152    1568 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:12:54.793312    1568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:12:54.793315    1568 out.go:304] Setting ErrFile to fd 2...
	I0719 11:12:54.793318    1568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:12:54.793453    1568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	W0719 11:12:54.793535    1568 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19307-1066/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19307-1066/.minikube/config/config.json: no such file or directory
	I0719 11:12:54.794759    1568 out.go:298] Setting JSON to true
	I0719 11:12:54.812311    1568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":737,"bootTime":1721412037,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:12:54.812372    1568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:12:54.817717    1568 out.go:97] [download-only-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:12:54.817855    1568 notify.go:220] Checking for updates...
	W0719 11:12:54.817911    1568 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 11:12:54.820686    1568 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:12:54.827704    1568 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:12:54.830764    1568 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:12:54.833745    1568 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:12:54.841780    1568 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	W0719 11:12:54.847714    1568 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:12:54.847925    1568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:12:54.851800    1568 out.go:97] Using the qemu2 driver based on user configuration
	I0719 11:12:54.851822    1568 start.go:297] selected driver: qemu2
	I0719 11:12:54.851837    1568 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:12:54.851932    1568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:12:54.854710    1568 out.go:169] Automatically selected the socket_vmnet network
	I0719 11:12:54.860307    1568 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 11:12:54.860380    1568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:12:54.860418    1568 cni.go:84] Creating CNI manager for ""
	I0719 11:12:54.860423    1568 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 11:12:54.860479    1568 start.go:340] cluster config:
	{Name:download-only-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:12:54.865604    1568 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:12:54.869558    1568 out.go:97] Downloading VM boot image ...
	I0719 11:12:54.869573    1568 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0719 11:12:59.388201    1568 out.go:97] Starting "download-only-914000" primary control-plane node in "download-only-914000" cluster
	I0719 11:12:59.388240    1568 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:12:59.442350    1568 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 11:12:59.442374    1568 cache.go:56] Caching tarball of preloaded images
	I0719 11:12:59.442522    1568 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:12:59.447613    1568 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 11:12:59.447620    1568 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:12:59.530447    1568 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 11:13:04.743702    1568 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:04.743880    1568 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:05.439835    1568 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 11:13:05.440029    1568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/download-only-914000/config.json ...
	I0719 11:13:05.440059    1568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/download-only-914000/config.json: {Name:mk0c144cee678b853797870f94b425b1e9982c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:13:05.440307    1568 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:13:05.440494    1568 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0719 11:13:05.839641    1568 out.go:169] 
	W0719 11:13:05.845869    1568 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60] Decompressors:map[bz2:0x14000168830 gz:0x14000168838 tar:0x140001687e0 tar.bz2:0x140001687f0 tar.gz:0x14000168800 tar.xz:0x14000168810 tar.zst:0x14000168820 tbz2:0x140001687f0 tgz:0x14000168800 txz:0x14000168810 tzst:0x14000168820 xz:0x14000168840 zip:0x14000168850 zst:0x14000168848] Getters:map[file:0x1400077e6d0 http:0x140008b6190 https:0x140008b61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0719 11:13:05.845894    1568 out_reason.go:110] 
	W0719 11:13:05.853737    1568 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:13:05.857675    1568 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-914000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.12s)
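
The root cause is the 404 on the checksum file: dl.k8s.io serves no kubectl build for v1.20.0 on darwin/arm64 (Apple-silicon binaries appear only in later Kubernetes releases), so caching kubectl for this version can never succeed on this host. A quick check outside minikube, assuming curl and network access; the amd64 URL is included only as a published-artifact comparison:

  # HEAD the checksum file the getter requested; expect 404
  curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
  # same release on darwin/amd64; expect 200
  curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256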

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-522000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-522000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.7599095s)

-- stdout --
	* [offline-docker-522000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-522000" primary control-plane node in "offline-docker-522000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-522000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 11:54:31.594123    3794 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:54:31.594644    3794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:31.594655    3794 out.go:304] Setting ErrFile to fd 2...
	I0719 11:54:31.594662    3794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:31.595060    3794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:54:31.596668    3794 out.go:298] Setting JSON to false
	I0719 11:54:31.614120    3794 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3234,"bootTime":1721412037,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:54:31.614207    3794 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:54:31.619462    3794 out.go:177] * [offline-docker-522000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:54:31.627281    3794 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:54:31.627305    3794 notify.go:220] Checking for updates...
	I0719 11:54:31.635272    3794 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:54:31.638273    3794 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:54:31.641311    3794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:54:31.642474    3794 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:54:31.645357    3794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:54:31.648624    3794 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:54:31.648700    3794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:54:31.652101    3794 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:54:31.659295    3794 start.go:297] selected driver: qemu2
	I0719 11:54:31.659306    3794 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:54:31.659314    3794 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:54:31.661232    3794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:54:31.664327    3794 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:54:31.667413    3794 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:54:31.667450    3794 cni.go:84] Creating CNI manager for ""
	I0719 11:54:31.667456    3794 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:54:31.667464    3794 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:54:31.667501    3794 start.go:340] cluster config:
	{Name:offline-docker-522000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:54:31.671092    3794 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:54:31.677194    3794 out.go:177] * Starting "offline-docker-522000" primary control-plane node in "offline-docker-522000" cluster
	I0719 11:54:31.681214    3794 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:54:31.681247    3794 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:54:31.681259    3794 cache.go:56] Caching tarball of preloaded images
	I0719 11:54:31.681327    3794 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:54:31.681332    3794 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:54:31.681409    3794 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/offline-docker-522000/config.json ...
	I0719 11:54:31.681421    3794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/offline-docker-522000/config.json: {Name:mkc0b51d2d00bd7a02f80360bfc90ab762c11a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:54:31.681683    3794 start.go:360] acquireMachinesLock for offline-docker-522000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:31.681718    3794 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "offline-docker-522000"
	I0719 11:54:31.681728    3794 start.go:93] Provisioning new machine with config: &{Name:offline-docker-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:31.681760    3794 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:31.685246    3794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:31.700988    3794 start.go:159] libmachine.API.Create for "offline-docker-522000" (driver="qemu2")
	I0719 11:54:31.701018    3794 client.go:168] LocalClient.Create starting
	I0719 11:54:31.701102    3794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:31.701135    3794 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:31.701148    3794 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:31.701194    3794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:31.701216    3794 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:31.701224    3794 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:31.701730    3794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:31.843048    3794 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:31.972345    3794 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:31.972355    3794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:31.972613    3794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2
	I0719 11:54:31.982501    3794 main.go:141] libmachine: STDOUT: 
	I0719 11:54:31.982523    3794 main.go:141] libmachine: STDERR: 
	I0719 11:54:31.982576    3794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2 +20000M
	I0719 11:54:31.992927    3794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:31.992944    3794 main.go:141] libmachine: STDERR: 
	I0719 11:54:31.992957    3794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2
	I0719 11:54:31.992963    3794 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:31.992976    3794 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:31.993003    3794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:17:c8:fa:a1:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2
	I0719 11:54:31.994755    3794 main.go:141] libmachine: STDOUT: 
	I0719 11:54:31.994772    3794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:31.994790    3794 client.go:171] duration metric: took 293.773292ms to LocalClient.Create
	I0719 11:54:33.995457    3794 start.go:128] duration metric: took 2.313679292s to createHost
	I0719 11:54:33.995474    3794 start.go:83] releasing machines lock for "offline-docker-522000", held for 2.313782625s
	W0719 11:54:33.995492    3794 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:34.001640    3794 out.go:177] * Deleting "offline-docker-522000" in qemu2 ...
	W0719 11:54:34.015158    3794 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:34.015170    3794 start.go:729] Will try again in 5 seconds ...
	I0719 11:54:39.017271    3794 start.go:360] acquireMachinesLock for offline-docker-522000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:39.017512    3794 start.go:364] duration metric: took 170.042µs to acquireMachinesLock for "offline-docker-522000"
	I0719 11:54:39.017573    3794 start.go:93] Provisioning new machine with config: &{Name:offline-docker-522000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-522000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:39.017675    3794 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:39.026389    3794 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:39.062200    3794 start.go:159] libmachine.API.Create for "offline-docker-522000" (driver="qemu2")
	I0719 11:54:39.062244    3794 client.go:168] LocalClient.Create starting
	I0719 11:54:39.062343    3794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:39.062393    3794 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:39.062412    3794 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:39.062467    3794 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:39.062505    3794 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:39.062524    3794 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:39.063081    3794 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:39.210850    3794 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:39.261667    3794 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:39.261673    3794 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:39.261841    3794 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2
	I0719 11:54:39.271105    3794 main.go:141] libmachine: STDOUT: 
	I0719 11:54:39.271117    3794 main.go:141] libmachine: STDERR: 
	I0719 11:54:39.271173    3794 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2 +20000M
	I0719 11:54:39.278887    3794 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:39.278903    3794 main.go:141] libmachine: STDERR: 
	I0719 11:54:39.278914    3794 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2
	I0719 11:54:39.278918    3794 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:39.278927    3794 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:39.278952    3794 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a6:76:2a:b9:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/offline-docker-522000/disk.qcow2
	I0719 11:54:39.280553    3794 main.go:141] libmachine: STDOUT: 
	I0719 11:54:39.280568    3794 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:39.280581    3794 client.go:171] duration metric: took 218.334584ms to LocalClient.Create
	I0719 11:54:41.282725    3794 start.go:128] duration metric: took 2.265048875s to createHost
	I0719 11:54:41.282794    3794 start.go:83] releasing machines lock for "offline-docker-522000", held for 2.265297041s
	W0719 11:54:41.283173    3794 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-522000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:41.295821    3794 out.go:177] 
	W0719 11:54:41.298854    3794 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:54:41.298909    3794 out.go:239] * 
	* 
	W0719 11:54:41.301644    3794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:54:41.310798    3794 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-522000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-19 11:54:41.326067 -0700 PDT m=+2506.658366709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-522000 -n offline-docker-522000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-522000 -n offline-docker-522000: exit status 7 (69.171125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-522000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-522000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-522000
--- FAIL: TestOffline (9.91s)
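
TestOffline is the first of the many failures below that die identically: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so no VM is ever created. The fix is on the host, not in the tests. A diagnostic sketch, assuming the Homebrew-managed socket_vmnet setup that minikube's qemu2 docs describe (service name and gateway address are the socket_vmnet README defaults and may differ on this agent):

  # does the socket exist, and does any process hold it open?
  ls -l /var/run/socket_vmnet
  sudo lsof -U | grep /var/run/socket_vmnet
  # restart the daemon if it is down (Homebrew service install)
  sudo brew services restart socket_vmnet
  # or run it in the foreground with the README's defaults
  sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet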

TestCertOptions (10.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-808000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-808000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.951957833s)

-- stdout --
	* [cert-options-808000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-808000" primary control-plane node in "cert-options-808000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-808000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-808000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-808000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-808000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-808000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.878583ms)

-- stdout --
	* The control-plane node cert-options-808000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-808000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-808000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-808000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-808000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-808000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.977375ms)

-- stdout --
	* The control-plane node cert-options-808000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-808000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-808000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-808000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-808000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-19 11:55:12.495465 -0700 PDT m=+2537.828190084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-808000 -n cert-options-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-808000 -n cert-options-808000: exit status 7 (29.49525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-808000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-808000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-808000
--- FAIL: TestCertOptions (10.21s)
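Every assertion in this test failed for the same upstream reason: the qemu2 driver could not reach the socket_vmnet daemon ('Failed to connect to "/var/run/socket_vmnet": Connection refused'), so no VM ever booted and the certificate checks ran against a stopped host. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew per the minikube qemu2 driver docs (the service name and socket path are assumptions and may differ on this agent):

	# Does the daemon socket exist? It should be present while socket_vmnet is running.
	ls -l /var/run/socket_vmnet
	# If the install is brew-managed, restart the daemon (assumed formula name: socket_vmnet).
	sudo brew services restart socket_vmnet
	# Then retry a single profile before re-running the whole suite.
	out/minikube-darwin-arm64 delete -p cert-options-808000
	out/minikube-darwin-arm64 start -p cert-options-808000 --driver=qemu2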

TestCertExpiration (195.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.919724542s)

-- stdout --
	* [cert-expiration-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-532000" primary control-plane node in "cert-expiration-532000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-532000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-532000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.224625458s)

-- stdout --
	* [cert-expiration-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-532000" primary control-plane node in "cert-expiration-532000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-532000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-532000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-532000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-532000" primary control-plane node in "cert-expiration-532000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-532000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-532000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-532000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-19 11:58:12.514992 -0700 PDT m=+2717.850175209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-532000 -n cert-expiration-532000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-532000 -n cert-expiration-532000: exit status 7 (51.685791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-532000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-532000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-532000
--- FAIL: TestCertExpiration (195.28s)
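The two start invocations show the test's intent: bring the cluster up with three-minute certificates (--cert-expiration=3m), let them lapse, then restart with --cert-expiration=8760h and expect minikube to warn about the expired certs. Since neither start got past socket_vmnet, the "did not warn about expired certs" assertion failed only as a consequence. A manual reproduction sketch once the daemon is healthy (the 3-minute wait is an assumption about how long the short-lived certificates need to expire):

	out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # allow the 3m certificates to lapse
	out/minikube-darwin-arm64 start -p cert-expiration-532000 --memory=2048 --cert-expiration=8760h --driver=qemu2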

TestDockerFlags (10.19s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-036000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
E0719 11:54:52.585789    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-036000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.960834083s)

-- stdout --
	* [docker-flags-036000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-036000" primary control-plane node in "docker-flags-036000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-036000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 11:54:52.227776    3987 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:54:52.227918    3987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:52.227922    3987 out.go:304] Setting ErrFile to fd 2...
	I0719 11:54:52.227924    3987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:52.228043    3987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:54:52.229055    3987 out.go:298] Setting JSON to false
	I0719 11:54:52.245123    3987 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3255,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:54:52.245214    3987 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:54:52.250178    3987 out.go:177] * [docker-flags-036000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:54:52.257056    3987 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:54:52.257086    3987 notify.go:220] Checking for updates...
	I0719 11:54:52.264061    3987 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:54:52.267063    3987 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:54:52.270083    3987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:54:52.273003    3987 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:54:52.276089    3987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:54:52.279551    3987 config.go:182] Loaded profile config "force-systemd-flag-729000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:54:52.279615    3987 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:54:52.279660    3987 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:54:52.284006    3987 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:54:52.291085    3987 start.go:297] selected driver: qemu2
	I0719 11:54:52.291092    3987 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:54:52.291100    3987 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:54:52.293475    3987 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:54:52.296040    3987 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:54:52.299121    3987 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0719 11:54:52.299144    3987 cni.go:84] Creating CNI manager for ""
	I0719 11:54:52.299155    3987 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:54:52.299160    3987 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:54:52.299202    3987 start.go:340] cluster config:
	{Name:docker-flags-036000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-036000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:54:52.302969    3987 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:54:52.310023    3987 out.go:177] * Starting "docker-flags-036000" primary control-plane node in "docker-flags-036000" cluster
	I0719 11:54:52.314087    3987 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:54:52.314103    3987 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:54:52.314116    3987 cache.go:56] Caching tarball of preloaded images
	I0719 11:54:52.314200    3987 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:54:52.314206    3987 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:54:52.314264    3987 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/docker-flags-036000/config.json ...
	I0719 11:54:52.314283    3987 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/docker-flags-036000/config.json: {Name:mk016514dd44313daad9c5d3761b88b01117ba92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:54:52.314501    3987 start.go:360] acquireMachinesLock for docker-flags-036000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:52.314544    3987 start.go:364] duration metric: took 34.667µs to acquireMachinesLock for "docker-flags-036000"
	I0719 11:54:52.314555    3987 start.go:93] Provisioning new machine with config: &{Name:docker-flags-036000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-036000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:52.314582    3987 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:52.323087    3987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:52.341345    3987 start.go:159] libmachine.API.Create for "docker-flags-036000" (driver="qemu2")
	I0719 11:54:52.341377    3987 client.go:168] LocalClient.Create starting
	I0719 11:54:52.341442    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:52.341475    3987 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:52.341489    3987 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:52.341530    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:52.341558    3987 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:52.341566    3987 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:52.341984    3987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:52.487685    3987 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:52.567629    3987 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:52.567635    3987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:52.567797    3987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2
	I0719 11:54:52.577063    3987 main.go:141] libmachine: STDOUT: 
	I0719 11:54:52.577080    3987 main.go:141] libmachine: STDERR: 
	I0719 11:54:52.577125    3987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2 +20000M
	I0719 11:54:52.584871    3987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:52.584887    3987 main.go:141] libmachine: STDERR: 
	I0719 11:54:52.584908    3987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2
	I0719 11:54:52.584911    3987 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:52.584924    3987 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:52.584949    3987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:da:0a:10:01:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2
	I0719 11:54:52.586573    3987 main.go:141] libmachine: STDOUT: 
	I0719 11:54:52.586587    3987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:52.586603    3987 client.go:171] duration metric: took 245.225875ms to LocalClient.Create
	I0719 11:54:54.588746    3987 start.go:128] duration metric: took 2.274173292s to createHost
	I0719 11:54:54.588796    3987 start.go:83] releasing machines lock for "docker-flags-036000", held for 2.274270084s
	W0719 11:54:54.588860    3987 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:54.601991    3987 out.go:177] * Deleting "docker-flags-036000" in qemu2 ...
	W0719 11:54:54.626883    3987 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:54.626906    3987 start.go:729] Will try again in 5 seconds ...
	I0719 11:54:59.629034    3987 start.go:360] acquireMachinesLock for docker-flags-036000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:59.756704    3987 start.go:364] duration metric: took 127.48475ms to acquireMachinesLock for "docker-flags-036000"
	I0719 11:54:59.756834    3987 start.go:93] Provisioning new machine with config: &{Name:docker-flags-036000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-036000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:59.757089    3987 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:59.770829    3987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:59.822334    3987 start.go:159] libmachine.API.Create for "docker-flags-036000" (driver="qemu2")
	I0719 11:54:59.822391    3987 client.go:168] LocalClient.Create starting
	I0719 11:54:59.822515    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:59.822575    3987 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:59.822592    3987 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:59.822651    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:59.822695    3987 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:59.822705    3987 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:59.823205    3987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:59.976123    3987 main.go:141] libmachine: Creating SSH key...
	I0719 11:55:00.094945    3987 main.go:141] libmachine: Creating Disk image...
	I0719 11:55:00.094952    3987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:55:00.095127    3987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2
	I0719 11:55:00.104122    3987 main.go:141] libmachine: STDOUT: 
	I0719 11:55:00.104141    3987 main.go:141] libmachine: STDERR: 
	I0719 11:55:00.104197    3987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2 +20000M
	I0719 11:55:00.112071    3987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:55:00.112089    3987 main.go:141] libmachine: STDERR: 
	I0719 11:55:00.112108    3987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2
	I0719 11:55:00.112113    3987 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:55:00.112124    3987 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:55:00.112151    3987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:59:d7:cf:83:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/docker-flags-036000/disk.qcow2
	I0719 11:55:00.113881    3987 main.go:141] libmachine: STDOUT: 
	I0719 11:55:00.113894    3987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:55:00.113909    3987 client.go:171] duration metric: took 291.516416ms to LocalClient.Create
	I0719 11:55:02.116058    3987 start.go:128] duration metric: took 2.358966875s to createHost
	I0719 11:55:02.116108    3987 start.go:83] releasing machines lock for "docker-flags-036000", held for 2.359358917s
	W0719 11:55:02.116476    3987 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-036000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-036000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:55:02.131108    3987 out.go:177] 
	W0719 11:55:02.137025    3987 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:55:02.137101    3987 out.go:239] * 
	* 
	W0719 11:55:02.139569    3987 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:55:02.146986    3987 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-036000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-036000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-036000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.701833ms)

-- stdout --
	* The control-plane node docker-flags-036000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-036000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-036000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-036000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-036000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-036000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-036000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-036000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-036000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.630583ms)

-- stdout --
	* The control-plane node docker-flags-036000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-036000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-036000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-036000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-036000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-036000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-19 11:55:02.286868 -0700 PDT m=+2527.619453542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-036000 -n docker-flags-036000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-036000 -n docker-flags-036000: exit status 7 (29.258458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-036000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-036000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-036000
--- FAIL: TestDockerFlags (10.19s)
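The --docker-env and --docker-opt values under test never reached a Docker daemon here; both systemctl probes ran against a stopped host. On a healthy run the two probes would be expected to surface the flags roughly as sketched below (the exact output shape is an assumption; elided fields are marked with ...):

	$ out/minikube-darwin-arm64 -p docker-flags-036000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT ...
	$ out/minikube-darwin-arm64 -p docker-flags-036000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... ; ... }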

TestForceSystemdFlag (10.04s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-729000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-729000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.85518725s)

-- stdout --
	* [force-systemd-flag-729000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-729000" primary control-plane node in "force-systemd-flag-729000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-729000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 11:54:47.353637    3966 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:54:47.353765    3966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:47.353768    3966 out.go:304] Setting ErrFile to fd 2...
	I0719 11:54:47.353770    3966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:47.353902    3966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:54:47.354911    3966 out.go:298] Setting JSON to false
	I0719 11:54:47.370761    3966 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3250,"bootTime":1721412037,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:54:47.370863    3966 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:54:47.376838    3966 out.go:177] * [force-systemd-flag-729000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:54:47.383874    3966 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:54:47.383915    3966 notify.go:220] Checking for updates...
	I0719 11:54:47.391767    3966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:54:47.394829    3966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:54:47.397856    3966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:54:47.400779    3966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:54:47.403815    3966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:54:47.407215    3966 config.go:182] Loaded profile config "force-systemd-env-164000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:54:47.407290    3966 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:54:47.407352    3966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:54:47.411798    3966 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:54:47.418774    3966 start.go:297] selected driver: qemu2
	I0719 11:54:47.418779    3966 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:54:47.418785    3966 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:54:47.420996    3966 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:54:47.425858    3966 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:54:47.427331    3966 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:54:47.427357    3966 cni.go:84] Creating CNI manager for ""
	I0719 11:54:47.427365    3966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:54:47.427372    3966 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:54:47.427407    3966 start.go:340] cluster config:
	{Name:force-systemd-flag-729000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:54:47.431047    3966 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:54:47.438825    3966 out.go:177] * Starting "force-systemd-flag-729000" primary control-plane node in "force-systemd-flag-729000" cluster
	I0719 11:54:47.442811    3966 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:54:47.442826    3966 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:54:47.442834    3966 cache.go:56] Caching tarball of preloaded images
	I0719 11:54:47.442896    3966 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:54:47.442902    3966 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:54:47.442954    3966 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/force-systemd-flag-729000/config.json ...
	I0719 11:54:47.442967    3966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/force-systemd-flag-729000/config.json: {Name:mk959e00b44000879b47da423b6ae6132cd62113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:54:47.443198    3966 start.go:360] acquireMachinesLock for force-systemd-flag-729000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:47.443237    3966 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "force-systemd-flag-729000"
	I0719 11:54:47.443248    3966 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:47.443286    3966 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:47.450786    3966 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:47.468692    3966 start.go:159] libmachine.API.Create for "force-systemd-flag-729000" (driver="qemu2")
	I0719 11:54:47.468732    3966 client.go:168] LocalClient.Create starting
	I0719 11:54:47.468804    3966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:47.468836    3966 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:47.468845    3966 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:47.468889    3966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:47.468913    3966 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:47.468922    3966 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:47.469267    3966 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:47.610375    3966 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:47.643623    3966 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:47.643629    3966 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:47.643777    3966 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2
	I0719 11:54:47.653048    3966 main.go:141] libmachine: STDOUT: 
	I0719 11:54:47.653065    3966 main.go:141] libmachine: STDERR: 
	I0719 11:54:47.653116    3966 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2 +20000M
	I0719 11:54:47.660878    3966 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:47.660891    3966 main.go:141] libmachine: STDERR: 
	I0719 11:54:47.660912    3966 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2
	I0719 11:54:47.660928    3966 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:47.660942    3966 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:47.660971    3966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:28:8a:eb:0b:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2
	I0719 11:54:47.662538    3966 main.go:141] libmachine: STDOUT: 
	I0719 11:54:47.662552    3966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:47.662567    3966 client.go:171] duration metric: took 193.8345ms to LocalClient.Create
	I0719 11:54:49.664719    3966 start.go:128] duration metric: took 2.221437583s to createHost
	I0719 11:54:49.664763    3966 start.go:83] releasing machines lock for "force-systemd-flag-729000", held for 2.221546917s
	W0719 11:54:49.664836    3966 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:49.692754    3966 out.go:177] * Deleting "force-systemd-flag-729000" in qemu2 ...
	W0719 11:54:49.712638    3966 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:49.712666    3966 start.go:729] Will try again in 5 seconds ...
	I0719 11:54:54.714762    3966 start.go:360] acquireMachinesLock for force-systemd-flag-729000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:54.715230    3966 start.go:364] duration metric: took 359µs to acquireMachinesLock for "force-systemd-flag-729000"
	I0719 11:54:54.715369    3966 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-729000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:54.715695    3966 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:54.724117    3966 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:54.774369    3966 start.go:159] libmachine.API.Create for "force-systemd-flag-729000" (driver="qemu2")
	I0719 11:54:54.774551    3966 client.go:168] LocalClient.Create starting
	I0719 11:54:54.774660    3966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:54.774737    3966 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:54.774752    3966 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:54.774806    3966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:54.774851    3966 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:54.774865    3966 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:54.776058    3966 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:54.931506    3966 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:55.123205    3966 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:55.123211    3966 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:55.123404    3966 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2
	I0719 11:54:55.132921    3966 main.go:141] libmachine: STDOUT: 
	I0719 11:54:55.132945    3966 main.go:141] libmachine: STDERR: 
	I0719 11:54:55.133018    3966 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2 +20000M
	I0719 11:54:55.140804    3966 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:55.140816    3966 main.go:141] libmachine: STDERR: 
	I0719 11:54:55.140838    3966 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2
	I0719 11:54:55.140844    3966 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:55.140857    3966 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:55.140890    3966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:17:f6:db:cb:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-flag-729000/disk.qcow2
	I0719 11:54:55.142542    3966 main.go:141] libmachine: STDOUT: 
	I0719 11:54:55.142556    3966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:55.142569    3966 client.go:171] duration metric: took 368.018208ms to LocalClient.Create
	I0719 11:54:57.144720    3966 start.go:128] duration metric: took 2.429011459s to createHost
	I0719 11:54:57.144764    3966 start.go:83] releasing machines lock for "force-systemd-flag-729000", held for 2.429528125s
	W0719 11:54:57.145123    3966 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-729000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:57.153743    3966 out.go:177] 
	W0719 11:54:57.157716    3966 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:54:57.157740    3966 out.go:239] * 
	W0719 11:54:57.160712    3966 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:54:57.168664    3966 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-729000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-729000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-729000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (72.166292ms)

-- stdout --
	* The control-plane node force-systemd-flag-729000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-729000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-729000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-19 11:54:57.259698 -0700 PDT m=+2522.592215126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-729000 -n force-systemd-flag-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-729000 -n force-systemd-flag-729000: exit status 7 (34.711709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-729000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-729000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-729000
--- FAIL: TestForceSystemdFlag (10.04s)
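
Both createHost attempts above die on the same STDERR line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, before QEMU ever starts: socket_vmnet_client cannot reach the socket_vmnet daemon on the CI host, so this looks like host-infrastructure breakage rather than anything the test does. A minimal sketch of how one could verify that on the Jenkins machine (assuming socket_vmnet was installed via Homebrew, as the /opt/socket_vmnet paths in the log suggest):

    # Is the daemon running, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If either check fails, (re)start the service; root is required
    # because socket_vmnet drives the macOS vmnet framework.
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet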

TestForceSystemdEnv (10.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-164000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-164000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.542160083s)

-- stdout --
	* [force-systemd-env-164000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-164000" primary control-plane node in "force-systemd-env-164000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-164000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 11:54:41.503463    3931 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:54:41.503599    3931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:41.503603    3931 out.go:304] Setting ErrFile to fd 2...
	I0719 11:54:41.503605    3931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:54:41.503739    3931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:54:41.504776    3931 out.go:298] Setting JSON to false
	I0719 11:54:41.521077    3931 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3244,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:54:41.521148    3931 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:54:41.527420    3931 out.go:177] * [force-systemd-env-164000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:54:41.534280    3931 notify.go:220] Checking for updates...
	I0719 11:54:41.539295    3931 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:54:41.543352    3931 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:54:41.551292    3931 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:54:41.555322    3931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:54:41.562264    3931 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:54:41.570300    3931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0719 11:54:41.574677    3931 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:54:41.574723    3931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:54:41.578373    3931 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:54:41.585320    3931 start.go:297] selected driver: qemu2
	I0719 11:54:41.585326    3931 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:54:41.585331    3931 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:54:41.587655    3931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:54:41.591300    3931 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:54:41.595450    3931 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:54:41.595465    3931 cni.go:84] Creating CNI manager for ""
	I0719 11:54:41.595475    3931 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:54:41.595484    3931 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:54:41.595525    3931 start.go:340] cluster config:
	{Name:force-systemd-env-164000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:54:41.599340    3931 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:54:41.606313    3931 out.go:177] * Starting "force-systemd-env-164000" primary control-plane node in "force-systemd-env-164000" cluster
	I0719 11:54:41.610341    3931 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:54:41.610355    3931 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:54:41.610367    3931 cache.go:56] Caching tarball of preloaded images
	I0719 11:54:41.610426    3931 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:54:41.610431    3931 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:54:41.610485    3931 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/force-systemd-env-164000/config.json ...
	I0719 11:54:41.610499    3931 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/force-systemd-env-164000/config.json: {Name:mke7568fccd84f0646d46ad5d8453c75878653d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:54:41.610691    3931 start.go:360] acquireMachinesLock for force-systemd-env-164000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:41.610725    3931 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "force-systemd-env-164000"
	I0719 11:54:41.610741    3931 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:41.610773    3931 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:41.619297    3931 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:41.636021    3931 start.go:159] libmachine.API.Create for "force-systemd-env-164000" (driver="qemu2")
	I0719 11:54:41.636045    3931 client.go:168] LocalClient.Create starting
	I0719 11:54:41.636103    3931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:41.636141    3931 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:41.636153    3931 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:41.636191    3931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:41.636213    3931 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:41.636231    3931 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:41.636567    3931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:41.776864    3931 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:41.894545    3931 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:41.894556    3931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:41.894725    3931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0719 11:54:41.904272    3931 main.go:141] libmachine: STDOUT: 
	I0719 11:54:41.904293    3931 main.go:141] libmachine: STDERR: 
	I0719 11:54:41.904351    3931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2 +20000M
	I0719 11:54:41.912579    3931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:41.912593    3931 main.go:141] libmachine: STDERR: 
	I0719 11:54:41.912606    3931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0719 11:54:41.912610    3931 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:41.912623    3931 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:41.912648    3931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:79:d8:58:3d:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0719 11:54:41.914288    3931 main.go:141] libmachine: STDOUT: 
	I0719 11:54:41.914302    3931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:41.914323    3931 client.go:171] duration metric: took 278.278167ms to LocalClient.Create
	I0719 11:54:43.916425    3931 start.go:128] duration metric: took 2.305656125s to createHost
	I0719 11:54:43.916447    3931 start.go:83] releasing machines lock for "force-systemd-env-164000", held for 2.305749s
	W0719 11:54:43.916460    3931 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:43.926968    3931 out.go:177] * Deleting "force-systemd-env-164000" in qemu2 ...
	W0719 11:54:43.935848    3931 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:43.935859    3931 start.go:729] Will try again in 5 seconds ...
	I0719 11:54:48.938053    3931 start.go:360] acquireMachinesLock for force-systemd-env-164000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:49.664926    3931 start.go:364] duration metric: took 726.782041ms to acquireMachinesLock for "force-systemd-env-164000"
	I0719 11:54:49.665039    3931 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:49.665330    3931 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:49.679790    3931 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 11:54:49.731837    3931 start.go:159] libmachine.API.Create for "force-systemd-env-164000" (driver="qemu2")
	I0719 11:54:49.731892    3931 client.go:168] LocalClient.Create starting
	I0719 11:54:49.732023    3931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:49.732080    3931 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:49.732098    3931 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:49.732168    3931 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:49.732220    3931 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:49.732231    3931 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:49.732775    3931 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:49.885011    3931 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:49.956703    3931 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:49.956708    3931 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:49.956879    3931 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0719 11:54:49.966416    3931 main.go:141] libmachine: STDOUT: 
	I0719 11:54:49.966433    3931 main.go:141] libmachine: STDERR: 
	I0719 11:54:49.966487    3931 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2 +20000M
	I0719 11:54:49.974333    3931 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:49.974347    3931 main.go:141] libmachine: STDERR: 
	I0719 11:54:49.974362    3931 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0719 11:54:49.974367    3931 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:49.974377    3931 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:49.974401    3931 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:f7:26:39:b4:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0719 11:54:49.976061    3931 main.go:141] libmachine: STDOUT: 
	I0719 11:54:49.976075    3931 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:49.976086    3931 client.go:171] duration metric: took 244.191084ms to LocalClient.Create
	I0719 11:54:51.977888    3931 start.go:128] duration metric: took 2.31252425s to createHost
	I0719 11:54:51.977962    3931 start.go:83] releasing machines lock for "force-systemd-env-164000", held for 2.313002792s
	W0719 11:54:51.978282    3931 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-164000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:51.986924    3931 out.go:177] 
	W0719 11:54:51.990941    3931 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:54:51.990966    3931 out.go:239] * 
	W0719 11:54:51.993739    3931 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:54:52.002848    3931 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-164000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-164000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-164000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.528ms)

-- stdout --
	* The control-plane node force-systemd-env-164000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-164000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-164000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-19 11:54:52.09395 -0700 PDT m=+2517.426396459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-164000 -n force-systemd-env-164000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-164000 -n force-systemd-env-164000: exit status 7 (34.204833ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-164000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-164000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-164000
--- FAIL: TestForceSystemdEnv (10.73s)
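
TestForceSystemdEnv fails with the identical `Failed to connect to "/var/run/socket_vmnet": Connection refused` signature, from a process (pid 3931) running in the same minute as TestForceSystemdFlag (pid 3966), so both failures point at the single socket_vmnet outage described above; the host-side checks sketched after TestForceSystemdFlag apply here unchanged.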

TestFunctional/parallel/ServiceCmdConnect (31.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-189000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-189000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-krmfp" [1a477d00-3549-441e-aa3c-2b58066c0f8a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-krmfp" [1a477d00-3549-441e-aa3c-2b58066c0f8a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003802959s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:30431
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:30431: Get "http://192.168.105.4:30431": dial tcp 192.168.105.4:30431: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-189000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-krmfp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-189000/192.168.105.4
Start Time:       Fri, 19 Jul 2024 11:23:42 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://300b9890130de7564ea03512d73556882290252dee65e3ed4b44ee51eaf8dc19
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 19 Jul 2024 11:23:55 -0700
      Finished:     Fri, 19 Jul 2024 11:23:55 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zlp9t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-zlp9t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-krmfp to functional-189000
  Normal   Pulled     17s (x3 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    17s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    17s (x3 over 30s)  kubelet            Started container echoserver-arm
  Warning  BackOff    4s (x3 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-krmfp_default(1a477d00-3549-441e-aa3c-2b58066c0f8a)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-189000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
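
The `exec format error` in the container log above almost always means the binary being exec'd was built for a different CPU architecture than the arm64 test host, i.e. the nginx shipped inside registry.k8s.io/echoserver-arm:1.8 does not appear to be an arm64 executable despite the image tag. A hedged way to double-check what the image declares (standard docker CLI; whether the manifest matches the embedded binary is exactly what is in question here):

    # Pull the image and ask the local daemon what platform it claims.
    docker pull registry.k8s.io/echoserver-arm:1.8
    docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8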
functional_test.go:1610: (dbg) Run:  kubectl --context functional-189000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.186.109
IPs:                      10.106.186.109
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30431/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
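
Note the empty Endpoints: field in the service description above; it is the direct cause of the connection-refused fetches earlier in this test. With the only backing pod crash-looping and never becoming Ready, the service has no endpoints, so NodePort 30431 has nothing to forward to. A quick confirmation, using the same kubectl context the test uses:

    kubectl --context functional-189000 get endpoints hello-node-connect
    kubectl --context functional-189000 get pods -l app=hello-node-connect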
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-189000 -n functional-189000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-189000 ssh -- ls                                                                                          | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh cat                                                                                            | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | /mount-9p/test-1721413445917824000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh stat                                                                                           | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh stat                                                                                           | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh sudo                                                                                           | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3349167846/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh -- ls                                                                                          | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh sudo                                                                                           | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-189000 ssh findmnt                                                                                        | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT | 19 Jul 24 11:24 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-189000 --dry-run                                                                                       | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-189000                                                                                                 | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-189000 | jenkins | v1.33.1 | 19 Jul 24 11:24 PDT |                     |
	|           | -p functional-189000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
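
For reference, a minimal Go sketch (not part of the test suite) of the findmnt verification the audited commands above perform; it assumes `minikube` is on PATH and the functional-189000 profile is still running:

```go
// Re-runs the audited mount checks by shelling out to the minikube CLI.
// Assumption: the functional-189000 profile exists and is started.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, mnt := range []string{"/mount-9p", "/mount1", "/mount2", "/mount3"} {
		// Equivalent to: minikube -p functional-189000 ssh -- findmnt -T <mnt>
		out, err := exec.Command("minikube", "-p", "functional-189000",
			"ssh", "--", "findmnt", "-T", mnt).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: not mounted (%v)\n", mnt, err)
			continue
		}
		fmt.Printf("%s:\n%s", mnt, out)
	}
}
```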
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:24:12
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:24:12.815709    2343 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:24:12.815814    2343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:24:12.815817    2343 out.go:304] Setting ErrFile to fd 2...
	I0719 11:24:12.815820    2343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:24:12.815946    2343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:24:12.817473    2343 out.go:298] Setting JSON to false
	I0719 11:24:12.836149    2343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1415,"bootTime":1721412037,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:24:12.836245    2343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:24:12.841003    2343 out.go:177] * [functional-189000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:24:12.849070    2343 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:24:12.849117    2343 notify.go:220] Checking for updates...
	I0719 11:24:12.856007    2343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:24:12.863006    2343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:24:12.866055    2343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:24:12.869015    2343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:24:12.872026    2343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:24:12.875788    2343 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:24:12.876050    2343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:24:12.880074    2343 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:24:12.887004    2343 start.go:297] selected driver: qemu2
	I0719 11:24:12.887012    2343 start.go:901] validating driver "qemu2" against &{Name:functional-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:24:12.887057    2343 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:24:12.892867    2343 out.go:177] 
	W0719 11:24:12.897054    2343 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 11:24:12.901041    2343 out.go:177] 
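
The dry-run start above fails fast because the requested 250MB is below minikube's 1800MB usable floor. An illustrative Go sketch of that size check (not minikube's actual code; the constant comes from the message in the log):

```go
// Sketch of the validation behind RSRC_INSUFFICIENT_REQ_MEMORY: a memory
// request below the usable minimum is rejected before any VM is created.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // usable minimum reported by minikube above

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil { // --memory 250MB from the log
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
	}
}
```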
	
	
	==> Docker <==
	Jul 19 18:23:58 functional-189000 dockerd[6204]: time="2024-07-19T18:23:58.534207641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:23:58 functional-189000 cri-dockerd[6539]: time="2024-07-19T18:23:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c21f05fc9c6e78f6d9088212100c29815d075649c2ba295ca85b0fadd48e5847/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 18:23:59 functional-189000 cri-dockerd[6539]: time="2024-07-19T18:23:59Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jul 19 18:23:59 functional-189000 dockerd[6204]: time="2024-07-19T18:23:59.333775878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 18:23:59 functional-189000 dockerd[6204]: time="2024-07-19T18:23:59.333803577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 18:23:59 functional-189000 dockerd[6204]: time="2024-07-19T18:23:59.333808950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:23:59 functional-189000 dockerd[6204]: time="2024-07-19T18:23:59.333926830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:24:06 functional-189000 dockerd[6204]: time="2024-07-19T18:24:06.792982307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 18:24:06 functional-189000 dockerd[6204]: time="2024-07-19T18:24:06.793156751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 18:24:06 functional-189000 dockerd[6204]: time="2024-07-19T18:24:06.793173871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:24:06 functional-189000 dockerd[6204]: time="2024-07-19T18:24:06.793376181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:24:06 functional-189000 cri-dockerd[6539]: time="2024-07-19T18:24:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/381f6ee47e38f9e131a92446d9c7420bf622242541c8ab461ad13ed6f975543a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 18:24:07 functional-189000 cri-dockerd[6539]: time="2024-07-19T18:24:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.931523964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.931745103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.931932336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.931988318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 18:24:07 functional-189000 dockerd[6197]: time="2024-07-19T18:24:07.964509115Z" level=info msg="ignoring event" container=c87bffef5e6a1e608a9b0b96ec9c033590ab5a8c85285a6d63cabec1c6d13371 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.964725547Z" level=info msg="shim disconnected" id=c87bffef5e6a1e608a9b0b96ec9c033590ab5a8c85285a6d63cabec1c6d13371 namespace=moby
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.964758828Z" level=warning msg="cleaning up after shim disconnected" id=c87bffef5e6a1e608a9b0b96ec9c033590ab5a8c85285a6d63cabec1c6d13371 namespace=moby
	Jul 19 18:24:07 functional-189000 dockerd[6204]: time="2024-07-19T18:24:07.964763785Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 18:24:09 functional-189000 dockerd[6197]: time="2024-07-19T18:24:09.256505816Z" level=info msg="ignoring event" container=381f6ee47e38f9e131a92446d9c7420bf622242541c8ab461ad13ed6f975543a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 18:24:09 functional-189000 dockerd[6204]: time="2024-07-19T18:24:09.256577044Z" level=info msg="shim disconnected" id=381f6ee47e38f9e131a92446d9c7420bf622242541c8ab461ad13ed6f975543a namespace=moby
	Jul 19 18:24:09 functional-189000 dockerd[6204]: time="2024-07-19T18:24:09.256710752Z" level=warning msg="cleaning up after shim disconnected" id=381f6ee47e38f9e131a92446d9c7420bf622242541c8ab461ad13ed6f975543a namespace=moby
	Jul 19 18:24:09 functional-189000 dockerd[6204]: time="2024-07-19T18:24:09.256720957Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c87bffef5e6a1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 seconds ago        Exited              mount-munger              0                   381f6ee47e38f       busybox-mount
	386dcf78b304f       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df                         14 seconds ago       Running             myfrontend                0                   c21f05fc9c6e7       sp-pod
	300b9890130de       72565bf5bbedf                                                                                         18 seconds ago       Exited              echoserver-arm            2                   46563e0050170       hello-node-connect-6f49f58cd5-krmfp
	d2002b55d0f60       72565bf5bbedf                                                                                         24 seconds ago       Exited              echoserver-arm            2                   d6f046e43fb7e       hello-node-65f5d5cc78-5rxbg
	b93060774b824       nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         38 seconds ago       Running             nginx                     0                   ad3c66e1b778f       nginx-svc
	f4a6f2fabf21c       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   45a8fbf7b2d19       coredns-7db6d8ff4d-h9vqf
	19047078e3a3a       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   2d84800d06702       kube-proxy-8kjt4
	1efa0519adccc       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   20a4f218e4542       storage-provisioner
	e9c2c305efe1f       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   4826276adf6d2       kube-controller-manager-functional-189000
	ae6d3daa89a16       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   6b96b25e9fbc4       kube-scheduler-functional-189000
	0018d0fe2c92d       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   fd24c3cdc230b       etcd-functional-189000
	07ca9719226d1       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   37e61bda0b6fa       kube-apiserver-functional-189000
	614c998ad4a64       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       2                   440e1707ef8d2       storage-provisioner
	c7a4d551c6763       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   3cd0a558614c7       coredns-7db6d8ff4d-h9vqf
	94cce84505e4f       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   ca6c8b568e290       kube-proxy-8kjt4
	30c88d21bdf63       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   26a5332bc432a       kube-controller-manager-functional-189000
	040e4f12333c3       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   639dbff0fab27       etcd-functional-189000
	ccc05fb276083       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   84cd86e2a2e15       kube-scheduler-functional-189000
	
	
	==> coredns [c7a4d551c676] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57529 - 36867 "HINFO IN 6741364184922061187.3410461640948671306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004958491s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1048772813]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:21:51.580) (total time: 30000ms):
	Trace[1048772813]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:22:21.580)
	Trace[1048772813]: [30.000481607s] [30.000481607s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[293055260]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:21:51.579) (total time: 30000ms):
	Trace[293055260]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:22:21.580)
	Trace[293055260]: [30.000909031s] [30.000909031s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[840400063]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 18:21:51.580) (total time: 30000ms):
	Trace[840400063]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:22:21.580)
	Trace[840400063]: [30.000651189s] [30.000651189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
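
The repeated failures in this coredns instance all reduce to one symptom: the kubernetes Service VIP 10.96.0.1:443 timing out. A minimal connectivity probe for that failure mode, as a hedged Go sketch (run from a pod on the cluster network, not from the host):

```go
// Probes the apiserver Service VIP that coredns could not reach above.
// A dial error here reproduces the "dial tcp 10.96.0.1:443: i/o timeout".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}
```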
	
	
	==> coredns [f4a6f2fabf21] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50299 - 20103 "HINFO IN 7118852042665255302.2192807165445511617. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009532826s
	[INFO] 10.244.0.1:61105 - 63306 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000106131s
	[INFO] 10.244.0.1:24234 - 42353 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000100882s
	[INFO] 10.244.0.1:45075 - 336 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000060272s
	[INFO] 10.244.0.1:51722 - 12423 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001605295s
	[INFO] 10.244.0.1:56120 - 56256 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000058814s
	[INFO] 10.244.0.1:29290 - 42896 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000074641s
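
For comparison, the A/AAAA queries answered NOERROR above can be reproduced with a plain resolver call; this illustrative Go sketch only works from inside the cluster, where /etc/resolv.conf points at 10.96.0.10 with the search domains shown in the cri-dockerd log:

```go
// Resolves the service name queried in the coredns log above.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("nginx-svc resolves to:", addrs) // the A record answered above
}
```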
	
	
	==> describe nodes <==
	Name:               functional-189000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-189000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=functional-189000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T11_21_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 18:21:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-189000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 18:24:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 18:24:02 +0000   Fri, 19 Jul 2024 18:21:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 18:24:02 +0000   Fri, 19 Jul 2024 18:21:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 18:24:02 +0000   Fri, 19 Jul 2024 18:21:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 18:24:02 +0000   Fri, 19 Jul 2024 18:21:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-189000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 523bbe739dc04352b66b2b2b86f51c01
	  System UUID:                523bbe739dc04352b66b2b2b86f51c01
	  Boot ID:                    7cba37e8-8bfd-4bd4-a0f7-429531579db5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-5rxbg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     hello-node-connect-6f49f58cd5-krmfp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-7db6d8ff4d-h9vqf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m46s
	  kube-system                 etcd-functional-189000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m
	  kube-system                 kube-apiserver-functional-189000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-functional-189000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-8kjt4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-scheduler-functional-189000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  Starting                 71s                    kube-proxy       
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s (x8 over 3m4s)    kubelet          Node functional-189000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x8 over 3m4s)    kubelet          Node functional-189000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x7 over 3m4s)    kubelet          Node functional-189000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m                     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m                     kubelet          Node functional-189000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m                     kubelet          Node functional-189000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m                     kubelet          Node functional-189000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m56s                  kubelet          Node functional-189000 status is now: NodeReady
	  Normal  RegisteredNode           2m47s                  node-controller  Node functional-189000 event: Registered Node functional-189000 in Controller
	  Normal  Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node functional-189000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node functional-189000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x7 over 2m26s)  kubelet          Node functional-189000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m11s                  node-controller  Node functional-189000 event: Registered Node functional-189000 in Controller
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)      kubelet          Node functional-189000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)      kubelet          Node functional-189000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 76s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)      kubelet          Node functional-189000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                    node-controller  Node functional-189000 event: Registered Node functional-189000 in Controller
	
	
	==> dmesg <==
	[Jul19 18:22] kauditd_printk_skb: 32 callbacks suppressed
	[ +29.991114] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[ +10.435569] systemd-fstab-generator[5723]: Ignoring "noauto" option for root device
	[  +0.055253] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.122841] systemd-fstab-generator[5757]: Ignoring "noauto" option for root device
	[  +0.080842] systemd-fstab-generator[5769]: Ignoring "noauto" option for root device
	[  +0.120406] systemd-fstab-generator[5783]: Ignoring "noauto" option for root device
	[  +5.104611] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.374045] systemd-fstab-generator[6419]: Ignoring "noauto" option for root device
	[  +0.084658] systemd-fstab-generator[6431]: Ignoring "noauto" option for root device
	[  +0.079222] systemd-fstab-generator[6443]: Ignoring "noauto" option for root device
	[  +0.092359] systemd-fstab-generator[6524]: Ignoring "noauto" option for root device
	[  +0.221498] systemd-fstab-generator[6688]: Ignoring "noauto" option for root device
	[  +1.013602] systemd-fstab-generator[6812]: Ignoring "noauto" option for root device
	[Jul19 18:23] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.063780] kauditd_printk_skb: 31 callbacks suppressed
	[  +4.506076] systemd-fstab-generator[7824]: Ignoring "noauto" option for root device
	[  +4.586526] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.127563] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.166888] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.271417] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.353205] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.164180] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.527969] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [0018d0fe2c92] <==
	{"level":"info","ts":"2024-07-19T18:22:58.418097Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T18:22:58.41812Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T18:22:58.418236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-19T18:22:58.418372Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-19T18:22:58.418415Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:22:58.418465Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T18:22:58.420558Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T18:22:58.423893Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T18:22:58.423937Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T18:22:58.424015Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T18:22:58.424034Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T18:23:00.101904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-19T18:23:00.10205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-19T18:23:00.102132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-19T18:23:00.10218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-19T18:23:00.102197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-19T18:23:00.10223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-19T18:23:00.102253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-19T18:23:00.107458Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:23:00.107467Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-189000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T18:23:00.108082Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:23:00.108686Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T18:23:00.10877Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T18:23:00.112278Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T18:23:00.112278Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [040e4f12333c] <==
	{"level":"info","ts":"2024-07-19T18:21:47.753211Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T18:21:49.648703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T18:21:49.648849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T18:21:49.648966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-19T18:21:49.649008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T18:21:49.649052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-19T18:21:49.649359Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T18:21:49.649487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-19T18:21:49.652311Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-189000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T18:21:49.652394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:21:49.653131Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T18:21:49.653228Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T18:21:49.653339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T18:21:49.659597Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T18:21:49.661238Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-19T18:22:43.696984Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T18:22:43.697021Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-189000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-19T18:22:43.697063Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T18:22:43.697106Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T18:22:43.706571Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T18:22:43.706596Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T18:22:43.706635Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-19T18:22:43.708878Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T18:22:43.708913Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-19T18:22:43.708918Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-189000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:24:13 up 3 min,  0 users,  load average: 0.45, 0.28, 0.12
	Linux functional-189000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [07ca9719226d] <==
	I0719 18:23:00.714471       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 18:23:00.714871       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 18:23:00.714927       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 18:23:00.715557       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 18:23:00.718279       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0719 18:23:00.719255       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 18:23:00.719581       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 18:23:00.719613       1 aggregator.go:165] initial CRD sync complete...
	I0719 18:23:00.719620       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 18:23:00.719622       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 18:23:00.719642       1 cache.go:39] Caches are synced for autoregister controller
	I0719 18:23:01.615924       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 18:23:02.271809       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 18:23:02.277538       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 18:23:02.288235       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 18:23:02.295695       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 18:23:02.297611       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 18:23:12.812527       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 18:23:13.046766       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 18:23:22.138242       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.8.199"}
	I0719 18:23:27.219848       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0719 18:23:27.262443       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.25.69"}
	I0719 18:23:32.697128       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.111.130"}
	I0719 18:23:42.099864       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.186.109"}
	I0719 18:24:13.336908       1 controller.go:615] quota admission added evaluator for: namespaces
	
	
	==> kube-controller-manager [30c88d21bdf6] <==
	I0719 18:22:02.807874       1 shared_informer.go:320] Caches are synced for persistent volume
	I0719 18:22:02.808277       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 18:22:02.808962       1 shared_informer.go:320] Caches are synced for crt configmap
	I0719 18:22:02.810004       1 shared_informer.go:320] Caches are synced for job
	I0719 18:22:02.811073       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0719 18:22:02.813300       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0719 18:22:02.813326       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0719 18:22:02.813347       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0719 18:22:02.813305       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0719 18:22:02.814453       1 shared_informer.go:320] Caches are synced for TTL
	I0719 18:22:02.815608       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 18:22:02.816723       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 18:22:02.904224       1 shared_informer.go:320] Caches are synced for taint
	I0719 18:22:02.904334       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0719 18:22:02.904371       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-189000"
	I0719 18:22:02.904399       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 18:22:02.943818       1 shared_informer.go:320] Caches are synced for daemon sets
	I0719 18:22:02.994105       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 18:22:03.008823       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 18:22:03.048202       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 18:22:03.423158       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 18:22:03.493916       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 18:22:03.493930       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 18:22:32.194402       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="4.10205ms"
	I0719 18:22:32.194545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.193µs"
	
	
	==> kube-controller-manager [e9c2c305efe1] <==
	I0719 18:23:44.030567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="27.241µs"
	I0719 18:23:50.072318       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.868µs"
	I0719 18:23:55.712118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="21.659µs"
	I0719 18:23:56.103038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.992µs"
	I0719 18:24:04.701624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.535µs"
	I0719 18:24:08.701625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="26.575µs"
	I0719 18:24:13.406258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="28.319311ms"
	E0719 18:24:13.406278       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.419260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="12.51417ms"
	E0719 18:24:13.419278       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.419684       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="25.995372ms"
	E0719 18:24:13.419695       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.424294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.004185ms"
	E0719 18:24:13.424397       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.424434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.17659ms"
	E0719 18:24:13.424449       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.431481       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.047526ms"
	E0719 18:24:13.431527       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.432419       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.312819ms"
	E0719 18:24:13.432430       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 18:24:13.496523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="40.654454ms"
	I0719 18:24:13.504706       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="28.877012ms"
	I0719 18:24:13.508600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="12.044442ms"
	I0719 18:24:13.508665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="29.658µs"
	I0719 18:24:13.512613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="7.403768ms"
	
	
	==> kube-proxy [19047078e3a3] <==
	I0719 18:23:02.206106       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:23:02.211422       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0719 18:23:02.226443       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:23:02.226465       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:23:02.226474       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:23:02.227059       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:23:02.227126       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:23:02.227132       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:23:02.227699       1 config.go:192] "Starting service config controller"
	I0719 18:23:02.227705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:23:02.227714       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:23:02.227716       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:23:02.228741       1 config.go:319] "Starting node config controller"
	I0719 18:23:02.228744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:23:02.327774       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:23:02.327823       1 shared_informer.go:320] Caches are synced for service config
	I0719 18:23:02.328889       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [94cce84505e4] <==
	I0719 18:21:51.586642       1 server_linux.go:69] "Using iptables proxy"
	I0719 18:21:51.591571       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0719 18:21:51.610283       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 18:21:51.610304       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 18:21:51.610313       1 server_linux.go:165] "Using iptables Proxier"
	I0719 18:21:51.612328       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 18:21:51.612399       1 server.go:872] "Version info" version="v1.30.3"
	I0719 18:21:51.612404       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:21:51.613617       1 config.go:319] "Starting node config controller"
	I0719 18:21:51.613623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 18:21:51.613626       1 config.go:192] "Starting service config controller"
	I0719 18:21:51.613630       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 18:21:51.613651       1 config.go:101] "Starting endpoint slice config controller"
	I0719 18:21:51.613654       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 18:21:51.714224       1 shared_informer.go:320] Caches are synced for node config
	I0719 18:21:51.714243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 18:21:51.714253       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [ae6d3daa89a1] <==
	I0719 18:22:58.624929       1 serving.go:380] Generated self-signed cert in-memory
	I0719 18:23:00.678635       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 18:23:00.678716       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:23:00.680581       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 18:23:00.680596       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 18:23:00.680736       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 18:23:00.680587       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0719 18:23:00.680880       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0719 18:23:00.680603       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0719 18:23:00.680935       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0719 18:23:00.680609       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 18:23:00.780929       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0719 18:23:00.781027       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0719 18:23:00.780931       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ccc05fb27608] <==
	I0719 18:21:48.222876       1 serving.go:380] Generated self-signed cert in-memory
	W0719 18:21:50.191491       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 18:21:50.191589       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 18:21:50.191629       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 18:21:50.191646       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 18:21:50.242782       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 18:21:50.242874       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 18:21:50.243605       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 18:21:50.243676       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 18:21:50.243713       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 18:21:50.243736       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 18:21:50.344258       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 18:22:43.710258       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 18:23:58 functional-189000 kubelet[6819]: I0719 18:23:58.307933    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1849891-808e-4edb-9bf3-dce3b156df96\" (UniqueName: \"kubernetes.io/host-path/87c5472f-35e7-48ee-bfc4-c408bb120a4c-pvc-b1849891-808e-4edb-9bf3-dce3b156df96\") pod \"sp-pod\" (UID: \"87c5472f-35e7-48ee-bfc4-c408bb120a4c\") " pod="default/sp-pod"
	Jul 19 18:23:59 functional-189000 kubelet[6819]: I0719 18:23:59.700762    6819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370939c5-e947-43af-8926-af523a740154" path="/var/lib/kubelet/pods/370939c5-e947-43af-8926-af523a740154/volumes"
	Jul 19 18:24:00 functional-189000 kubelet[6819]: I0719 18:24:00.147803    6819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.446582097 podStartE2EDuration="2.147789588s" podCreationTimestamp="2024-07-19 18:23:58 +0000 UTC" firstStartedPulling="2024-07-19 18:23:58.587706548 +0000 UTC m=+60.962263291" lastFinishedPulling="2024-07-19 18:23:59.288914081 +0000 UTC m=+61.663470782" observedRunningTime="2024-07-19 18:24:00.147660671 +0000 UTC m=+62.522217372" watchObservedRunningTime="2024-07-19 18:24:00.147789588 +0000 UTC m=+62.522346331"
	Jul 19 18:24:04 functional-189000 kubelet[6819]: I0719 18:24:04.697585    6819 scope.go:117] "RemoveContainer" containerID="d2002b55d0f60f168fe8ab8ce675dea0332d2feb78b9aca101e57ceb8737f624"
	Jul 19 18:24:04 functional-189000 kubelet[6819]: E0719 18:24:04.697959    6819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-5rxbg_default(6a747b20-9e56-4f73-ad82-1cff7ab89275)\"" pod="default/hello-node-65f5d5cc78-5rxbg" podUID="6a747b20-9e56-4f73-ad82-1cff7ab89275"
	Jul 19 18:24:06 functional-189000 kubelet[6819]: I0719 18:24:06.445712    6819 topology_manager.go:215] "Topology Admit Handler" podUID="0a5856e5-98f6-481e-aa1c-c875e571993b" podNamespace="default" podName="busybox-mount"
	Jul 19 18:24:06 functional-189000 kubelet[6819]: I0719 18:24:06.566175    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5d5\" (UniqueName: \"kubernetes.io/projected/0a5856e5-98f6-481e-aa1c-c875e571993b-kube-api-access-pj5d5\") pod \"busybox-mount\" (UID: \"0a5856e5-98f6-481e-aa1c-c875e571993b\") " pod="default/busybox-mount"
	Jul 19 18:24:06 functional-189000 kubelet[6819]: I0719 18:24:06.566211    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0a5856e5-98f6-481e-aa1c-c875e571993b-test-volume\") pod \"busybox-mount\" (UID: \"0a5856e5-98f6-481e-aa1c-c875e571993b\") " pod="default/busybox-mount"
	Jul 19 18:24:08 functional-189000 kubelet[6819]: I0719 18:24:08.697812    6819 scope.go:117] "RemoveContainer" containerID="300b9890130de7564ea03512d73556882290252dee65e3ed4b44ee51eaf8dc19"
	Jul 19 18:24:08 functional-189000 kubelet[6819]: E0719 18:24:08.698026    6819 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-krmfp_default(1a477d00-3549-441e-aa3c-2b58066c0f8a)\"" pod="default/hello-node-connect-6f49f58cd5-krmfp" podUID="1a477d00-3549-441e-aa3c-2b58066c0f8a"
	Jul 19 18:24:09 functional-189000 kubelet[6819]: I0719 18:24:09.384485    6819 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj5d5\" (UniqueName: \"kubernetes.io/projected/0a5856e5-98f6-481e-aa1c-c875e571993b-kube-api-access-pj5d5\") pod \"0a5856e5-98f6-481e-aa1c-c875e571993b\" (UID: \"0a5856e5-98f6-481e-aa1c-c875e571993b\") "
	Jul 19 18:24:09 functional-189000 kubelet[6819]: I0719 18:24:09.384504    6819 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0a5856e5-98f6-481e-aa1c-c875e571993b-test-volume\") pod \"0a5856e5-98f6-481e-aa1c-c875e571993b\" (UID: \"0a5856e5-98f6-481e-aa1c-c875e571993b\") "
	Jul 19 18:24:09 functional-189000 kubelet[6819]: I0719 18:24:09.384529    6819 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a5856e5-98f6-481e-aa1c-c875e571993b-test-volume" (OuterVolumeSpecName: "test-volume") pod "0a5856e5-98f6-481e-aa1c-c875e571993b" (UID: "0a5856e5-98f6-481e-aa1c-c875e571993b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 19 18:24:09 functional-189000 kubelet[6819]: I0719 18:24:09.385259    6819 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5856e5-98f6-481e-aa1c-c875e571993b-kube-api-access-pj5d5" (OuterVolumeSpecName: "kube-api-access-pj5d5") pod "0a5856e5-98f6-481e-aa1c-c875e571993b" (UID: "0a5856e5-98f6-481e-aa1c-c875e571993b"). InnerVolumeSpecName "kube-api-access-pj5d5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 18:24:09 functional-189000 kubelet[6819]: I0719 18:24:09.484798    6819 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pj5d5\" (UniqueName: \"kubernetes.io/projected/0a5856e5-98f6-481e-aa1c-c875e571993b-kube-api-access-pj5d5\") on node \"functional-189000\" DevicePath \"\""
	Jul 19 18:24:09 functional-189000 kubelet[6819]: I0719 18:24:09.484813    6819 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0a5856e5-98f6-481e-aa1c-c875e571993b-test-volume\") on node \"functional-189000\" DevicePath \"\""
	Jul 19 18:24:10 functional-189000 kubelet[6819]: I0719 18:24:10.193266    6819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="381f6ee47e38f9e131a92446d9c7420bf622242541c8ab461ad13ed6f975543a"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.490578    6819 topology_manager.go:215] "Topology Admit Handler" podUID="7d7532f7-bb2f-4a38-8ff8-9c570a99f52a" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-94zxl"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: E0719 18:24:13.490614    6819 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a5856e5-98f6-481e-aa1c-c875e571993b" containerName="mount-munger"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.490641    6819 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5856e5-98f6-481e-aa1c-c875e571993b" containerName="mount-munger"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.500116    6819 topology_manager.go:215] "Topology Admit Handler" podUID="f9e92458-fba0-44ee-8c3b-f44fde711df4" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-hn9sg"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.610813    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9q8f\" (UniqueName: \"kubernetes.io/projected/7d7532f7-bb2f-4a38-8ff8-9c570a99f52a-kube-api-access-t9q8f\") pod \"kubernetes-dashboard-779776cb65-94zxl\" (UID: \"7d7532f7-bb2f-4a38-8ff8-9c570a99f52a\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-94zxl"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.610934    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxl67\" (UniqueName: \"kubernetes.io/projected/f9e92458-fba0-44ee-8c3b-f44fde711df4-kube-api-access-lxl67\") pod \"dashboard-metrics-scraper-b5fc48f67-hn9sg\" (UID: \"f9e92458-fba0-44ee-8c3b-f44fde711df4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hn9sg"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.610947    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7d7532f7-bb2f-4a38-8ff8-9c570a99f52a-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-94zxl\" (UID: \"7d7532f7-bb2f-4a38-8ff8-9c570a99f52a\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-94zxl"
	Jul 19 18:24:13 functional-189000 kubelet[6819]: I0719 18:24:13.610978    6819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f9e92458-fba0-44ee-8c3b-f44fde711df4-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-hn9sg\" (UID: \"f9e92458-fba0-44ee-8c3b-f44fde711df4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hn9sg"
	
	
	==> storage-provisioner [1efa0519adcc] <==
	I0719 18:23:02.132651       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 18:23:02.137822       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 18:23:02.137884       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 18:23:19.530494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 18:23:19.530578       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-189000_255a1d60-1028-4c1b-9d16-8c5d25bcc144!
	I0719 18:23:19.530963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c025a6b5-94ad-44fa-9523-12c6ecca3ce1", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-189000_255a1d60-1028-4c1b-9d16-8c5d25bcc144 became leader
	I0719 18:23:19.631590       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-189000_255a1d60-1028-4c1b-9d16-8c5d25bcc144!
	I0719 18:23:43.808632       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0719 18:23:43.808663       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ccf350b3-c05e-4120-9d4f-d3b974805996 388 0 2024-07-19 18:21:28 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-19 18:21:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b1849891-808e-4edb-9bf3-dce3b156df96 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  b1849891-808e-4edb-9bf3-dce3b156df96 782 0 2024-07-19 18:23:43 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-19 18:23:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-19 18:23:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0719 18:23:43.809194       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-b1849891-808e-4edb-9bf3-dce3b156df96" provisioned
	I0719 18:23:43.809245       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0719 18:23:43.809265       1 volume_store.go:212] Trying to save persistentvolume "pvc-b1849891-808e-4edb-9bf3-dce3b156df96"
	I0719 18:23:43.809695       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b1849891-808e-4edb-9bf3-dce3b156df96", APIVersion:"v1", ResourceVersion:"782", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0719 18:23:43.816224       1 volume_store.go:219] persistentvolume "pvc-b1849891-808e-4edb-9bf3-dce3b156df96" saved
	I0719 18:23:43.816342       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b1849891-808e-4edb-9bf3-dce3b156df96", APIVersion:"v1", ResourceVersion:"782", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b1849891-808e-4edb-9bf3-dce3b156df96
	
	
	==> storage-provisioner [614c998ad4a6] <==
	I0719 18:22:04.124849       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 18:22:04.129282       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 18:22:04.129299       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 18:22:21.512339       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 18:22:21.512400       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-189000_320baa41-acc8-4b91-b5c0-901e1ff16d70!
	I0719 18:22:21.512749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c025a6b5-94ad-44fa-9523-12c6ecca3ce1", APIVersion:"v1", ResourceVersion:"541", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-189000_320baa41-acc8-4b91-b5c0-901e1ff16d70 became leader
	I0719 18:22:21.612961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-189000_320baa41-acc8-4b91-b5c0-901e1ff16d70!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-189000 -n functional-189000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-189000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-hn9sg kubernetes-dashboard-779776cb65-94zxl
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-189000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hn9sg kubernetes-dashboard-779776cb65-94zxl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-189000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hn9sg kubernetes-dashboard-779776cb65-94zxl: exit status 1 (45.301792ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-189000/192.168.105.4
	Start Time:       Fri, 19 Jul 2024 11:24:06 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://c87bffef5e6a1e608a9b0b96ec9c033590ab5a8c85285a6d63cabec1c6d13371
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Jul 2024 11:24:07 -0700
	      Finished:     Fri, 19 Jul 2024 11:24:07 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj5d5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pj5d5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/busybox-mount to functional-189000
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.052s (1.052s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-hn9sg" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-94zxl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-189000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hn9sg kubernetes-dashboard-779776cb65-94zxl: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.88s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-604000 node stop m02 -v=7 --alsologtostderr: (12.188473125s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr
E0719 11:31:11.155605    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:31:49.537485    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:33:27.290055    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:33:54.992837    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr: exit status 7 (2m55.964266041s)

                                                
                                                
-- stdout --
	ha-604000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-604000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-604000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:30:59.277396    2967 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:30:59.277757    2967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:30:59.277761    2967 out.go:304] Setting ErrFile to fd 2...
	I0719 11:30:59.277763    2967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:30:59.277913    2967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:30:59.278046    2967 out.go:298] Setting JSON to false
	I0719 11:30:59.278057    2967 mustload.go:65] Loading cluster: ha-604000
	I0719 11:30:59.278097    2967 notify.go:220] Checking for updates...
	I0719 11:30:59.278294    2967 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:30:59.278300    2967 status.go:255] checking status of ha-604000 ...
	I0719 11:30:59.279008    2967 status.go:330] ha-604000 host status = "Running" (err=<nil>)
	I0719 11:30:59.279016    2967 host.go:66] Checking if "ha-604000" exists ...
	I0719 11:30:59.279114    2967 host.go:66] Checking if "ha-604000" exists ...
	I0719 11:30:59.279224    2967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:30:59.279231    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/id_rsa Username:docker}
	W0719 11:31:25.198494    2967 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0719 11:31:25.198637    2967 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0719 11:31:25.198672    2967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0719 11:31:25.198698    2967 status.go:257] ha-604000 status: &{Name:ha-604000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 11:31:25.198723    2967 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0719 11:31:25.198737    2967 status.go:255] checking status of ha-604000-m02 ...
	I0719 11:31:25.198978    2967 status.go:330] ha-604000-m02 host status = "Stopped" (err=<nil>)
	I0719 11:31:25.198986    2967 status.go:343] host is not running, skipping remaining checks
	I0719 11:31:25.198989    2967 status.go:257] ha-604000-m02 status: &{Name:ha-604000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:31:25.198994    2967 status.go:255] checking status of ha-604000-m03 ...
	I0719 11:31:25.200028    2967 status.go:330] ha-604000-m03 host status = "Running" (err=<nil>)
	I0719 11:31:25.200040    2967 host.go:66] Checking if "ha-604000-m03" exists ...
	I0719 11:31:25.200377    2967 host.go:66] Checking if "ha-604000-m03" exists ...
	I0719 11:31:25.200563    2967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:31:25.200573    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m03/id_rsa Username:docker}
	W0719 11:32:40.200006    2967 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0719 11:32:40.200053    2967 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0719 11:32:40.200064    2967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0719 11:32:40.200068    2967 status.go:257] ha-604000-m03 status: &{Name:ha-604000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 11:32:40.200077    2967 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0719 11:32:40.200081    2967 status.go:255] checking status of ha-604000-m04 ...
	I0719 11:32:40.200845    2967 status.go:330] ha-604000-m04 host status = "Running" (err=<nil>)
	I0719 11:32:40.200852    2967 host.go:66] Checking if "ha-604000-m04" exists ...
	I0719 11:32:40.200962    2967 host.go:66] Checking if "ha-604000-m04" exists ...
	I0719 11:32:40.201088    2967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:32:40.201095    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m04/id_rsa Username:docker}
	W0719 11:33:55.201998    2967 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0719 11:33:55.202046    2967 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0719 11:33:55.202054    2967 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0719 11:33:55.202058    2967 status.go:257] ha-604000-m04 status: &{Name:ha-604000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0719 11:33:55.202069    2967 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr": ha-604000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-604000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-604000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-604000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr": ha-604000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-604000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-604000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-604000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr": ha-604000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-604000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-604000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-604000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 3 (25.958197791s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 11:34:21.160296    3001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0719 11:34:21.160302    3001 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.923089417s)
ha_test.go:413: expected profile "ha-604000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-604000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-604000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-604000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 3 (25.961853s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 11:36:05.043110    3017 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0719 11:36:05.043136    3017 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.89s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (209.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-604000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.104204791s)

                                                
                                                
-- stdout --
	* Starting "ha-604000-m02" control-plane node in "ha-604000" cluster
	* Restarting existing qemu2 VM for "ha-604000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-604000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:36:05.097943    3028 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:36:05.098235    3028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:36:05.098239    3028 out.go:304] Setting ErrFile to fd 2...
	I0719 11:36:05.098242    3028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:36:05.098406    3028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:36:05.098701    3028 mustload.go:65] Loading cluster: ha-604000
	I0719 11:36:05.098981    3028 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0719 11:36:05.099251    3028 host.go:58] "ha-604000-m02" host status: Stopped
	I0719 11:36:05.102896    3028 out.go:177] * Starting "ha-604000-m02" control-plane node in "ha-604000" cluster
	I0719 11:36:05.105765    3028 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:36:05.105781    3028 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:36:05.105792    3028 cache.go:56] Caching tarball of preloaded images
	I0719 11:36:05.105954    3028 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:36:05.105968    3028 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:36:05.106047    3028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/ha-604000/config.json ...
	I0719 11:36:05.106468    3028 start.go:360] acquireMachinesLock for ha-604000-m02: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:36:05.106517    3028 start.go:364] duration metric: took 35.625µs to acquireMachinesLock for "ha-604000-m02"
	I0719 11:36:05.106527    3028 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:36:05.106531    3028 fix.go:54] fixHost starting: m02
	I0719 11:36:05.106701    3028 fix.go:112] recreateIfNeeded on ha-604000-m02: state=Stopped err=<nil>
	W0719 11:36:05.106707    3028 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:36:05.110699    3028 out.go:177] * Restarting existing qemu2 VM for "ha-604000-m02" ...
	I0719 11:36:05.114763    3028 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:36:05.114836    3028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:b8:f9:a4:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/disk.qcow2
	I0719 11:36:05.117865    3028 main.go:141] libmachine: STDOUT: 
	I0719 11:36:05.117886    3028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:36:05.117914    3028 fix.go:56] duration metric: took 11.384ms for fixHost
	I0719 11:36:05.117918    3028 start.go:83] releasing machines lock for "ha-604000-m02", held for 11.396542ms
	W0719 11:36:05.117926    3028 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:36:05.117969    3028 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:36:05.117974    3028 start.go:729] Will try again in 5 seconds ...
	I0719 11:36:10.119994    3028 start.go:360] acquireMachinesLock for ha-604000-m02: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:36:10.120132    3028 start.go:364] duration metric: took 112.167µs to acquireMachinesLock for "ha-604000-m02"
	I0719 11:36:10.120165    3028 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:36:10.120169    3028 fix.go:54] fixHost starting: m02
	I0719 11:36:10.120385    3028 fix.go:112] recreateIfNeeded on ha-604000-m02: state=Stopped err=<nil>
	W0719 11:36:10.120393    3028 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:36:10.124035    3028 out.go:177] * Restarting existing qemu2 VM for "ha-604000-m02" ...
	I0719 11:36:10.128122    3028 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:36:10.128175    3028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:b8:f9:a4:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/disk.qcow2
	I0719 11:36:10.130519    3028 main.go:141] libmachine: STDOUT: 
	I0719 11:36:10.130541    3028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:36:10.130561    3028 fix.go:56] duration metric: took 10.392167ms for fixHost
	I0719 11:36:10.130565    3028 start.go:83] releasing machines lock for "ha-604000-m02", held for 10.428041ms
	W0719 11:36:10.130603    3028 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:36:10.135158    3028 out.go:177] 
	W0719 11:36:10.139167    3028 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:36:10.139171    3028 out.go:239] * 
	* 
	W0719 11:36:10.140734    3028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:36:10.145204    3028 out.go:177] 

** /stderr **
ha_test.go:422: I0719 11:36:05.097943    3028 out.go:291] Setting OutFile to fd 1 ...
I0719 11:36:05.098235    3028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:36:05.098239    3028 out.go:304] Setting ErrFile to fd 2...
I0719 11:36:05.098242    3028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:36:05.098406    3028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:36:05.098701    3028 mustload.go:65] Loading cluster: ha-604000
I0719 11:36:05.098981    3028 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0719 11:36:05.099251    3028 host.go:58] "ha-604000-m02" host status: Stopped
I0719 11:36:05.102896    3028 out.go:177] * Starting "ha-604000-m02" control-plane node in "ha-604000" cluster
I0719 11:36:05.105765    3028 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0719 11:36:05.105781    3028 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0719 11:36:05.105792    3028 cache.go:56] Caching tarball of preloaded images
I0719 11:36:05.105954    3028 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0719 11:36:05.105968    3028 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0719 11:36:05.106047    3028 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/ha-604000/config.json ...
I0719 11:36:05.106468    3028 start.go:360] acquireMachinesLock for ha-604000-m02: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 11:36:05.106517    3028 start.go:364] duration metric: took 35.625µs to acquireMachinesLock for "ha-604000-m02"
I0719 11:36:05.106527    3028 start.go:96] Skipping create...Using existing machine configuration
I0719 11:36:05.106531    3028 fix.go:54] fixHost starting: m02
I0719 11:36:05.106701    3028 fix.go:112] recreateIfNeeded on ha-604000-m02: state=Stopped err=<nil>
W0719 11:36:05.106707    3028 fix.go:138] unexpected machine state, will restart: <nil>
I0719 11:36:05.110699    3028 out.go:177] * Restarting existing qemu2 VM for "ha-604000-m02" ...
I0719 11:36:05.114763    3028 qemu.go:418] Using hvf for hardware acceleration
I0719 11:36:05.114836    3028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:b8:f9:a4:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/disk.qcow2
I0719 11:36:05.117865    3028 main.go:141] libmachine: STDOUT: 
I0719 11:36:05.117886    3028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0719 11:36:05.117914    3028 fix.go:56] duration metric: took 11.384ms for fixHost
I0719 11:36:05.117918    3028 start.go:83] releasing machines lock for "ha-604000-m02", held for 11.396542ms
W0719 11:36:05.117926    3028 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0719 11:36:05.117969    3028 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0719 11:36:05.117974    3028 start.go:729] Will try again in 5 seconds ...
I0719 11:36:10.119994    3028 start.go:360] acquireMachinesLock for ha-604000-m02: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 11:36:10.120132    3028 start.go:364] duration metric: took 112.167µs to acquireMachinesLock for "ha-604000-m02"
I0719 11:36:10.120165    3028 start.go:96] Skipping create...Using existing machine configuration
I0719 11:36:10.120169    3028 fix.go:54] fixHost starting: m02
I0719 11:36:10.120385    3028 fix.go:112] recreateIfNeeded on ha-604000-m02: state=Stopped err=<nil>
W0719 11:36:10.120393    3028 fix.go:138] unexpected machine state, will restart: <nil>
I0719 11:36:10.124035    3028 out.go:177] * Restarting existing qemu2 VM for "ha-604000-m02" ...
I0719 11:36:10.128122    3028 qemu.go:418] Using hvf for hardware acceleration
I0719 11:36:10.128175    3028 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:4f:b8:f9:a4:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m02/disk.qcow2
I0719 11:36:10.130519    3028 main.go:141] libmachine: STDOUT: 
I0719 11:36:10.130541    3028 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0719 11:36:10.130561    3028 fix.go:56] duration metric: took 10.392167ms for fixHost
I0719 11:36:10.130565    3028 start.go:83] releasing machines lock for "ha-604000-m02", held for 10.428041ms
W0719 11:36:10.130603    3028 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0719 11:36:10.135158    3028 out.go:177] 
W0719 11:36:10.139167    3028 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0719 11:36:10.139171    3028 out.go:239] * 
* 
W0719 11:36:10.140734    3028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 11:36:10.145204    3028 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-604000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr
E0719 11:36:49.532537    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:38:12.598961    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:38:27.285923    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr: exit status 7 (2m57.958595167s)

-- stdout --
	ha-604000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-604000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-604000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0719 11:36:10.179639    3035 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:36:10.179780    3035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:36:10.179783    3035 out.go:304] Setting ErrFile to fd 2...
	I0719 11:36:10.179786    3035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:36:10.179913    3035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:36:10.180046    3035 out.go:298] Setting JSON to false
	I0719 11:36:10.180061    3035 mustload.go:65] Loading cluster: ha-604000
	I0719 11:36:10.180102    3035 notify.go:220] Checking for updates...
	I0719 11:36:10.180280    3035 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:36:10.180287    3035 status.go:255] checking status of ha-604000 ...
	I0719 11:36:10.180941    3035 status.go:330] ha-604000 host status = "Running" (err=<nil>)
	I0719 11:36:10.180950    3035 host.go:66] Checking if "ha-604000" exists ...
	I0719 11:36:10.181040    3035 host.go:66] Checking if "ha-604000" exists ...
	I0719 11:36:10.181143    3035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:36:10.181149    3035 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/id_rsa Username:docker}
	W0719 11:36:10.181326    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:10.181341    3035 retry.go:31] will retry after 173.415256ms: dial tcp 192.168.105.5:22: connect: host is down
	W0719 11:36:10.357065    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:10.357101    3035 retry.go:31] will retry after 313.061835ms: dial tcp 192.168.105.5:22: connect: host is down
	W0719 11:36:10.672376    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:10.672414    3035 retry.go:31] will retry after 449.353572ms: dial tcp 192.168.105.5:22: connect: host is down
	W0719 11:36:11.123931    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:11.123952    3035 retry.go:31] will retry after 509.598842ms: dial tcp 192.168.105.5:22: connect: host is down
	W0719 11:36:11.635789    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:11.635874    3035 retry.go:31] will retry after 197.388267ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:11.834800    3035 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/id_rsa Username:docker}
	W0719 11:36:11.835099    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0719 11:36:11.835110    3035 retry.go:31] will retry after 339.917774ms: dial tcp 192.168.105.5:22: connect: host is down
	W0719 11:36:38.100884    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0719 11:36:38.100947    3035 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0719 11:36:38.100956    3035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0719 11:36:38.100959    3035 status.go:257] ha-604000 status: &{Name:ha-604000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 11:36:38.100974    3035 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0719 11:36:38.100978    3035 status.go:255] checking status of ha-604000-m02 ...
	I0719 11:36:38.101175    3035 status.go:330] ha-604000-m02 host status = "Stopped" (err=<nil>)
	I0719 11:36:38.101182    3035 status.go:343] host is not running, skipping remaining checks
	I0719 11:36:38.101185    3035 status.go:257] ha-604000-m02 status: &{Name:ha-604000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:36:38.101189    3035 status.go:255] checking status of ha-604000-m03 ...
	I0719 11:36:38.101824    3035 status.go:330] ha-604000-m03 host status = "Running" (err=<nil>)
	I0719 11:36:38.101830    3035 host.go:66] Checking if "ha-604000-m03" exists ...
	I0719 11:36:38.101923    3035 host.go:66] Checking if "ha-604000-m03" exists ...
	I0719 11:36:38.102044    3035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:36:38.102049    3035 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m03/id_rsa Username:docker}
	W0719 11:37:53.102989    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0719 11:37:53.103049    3035 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0719 11:37:53.103056    3035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0719 11:37:53.103060    3035 status.go:257] ha-604000-m03 status: &{Name:ha-604000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 11:37:53.103069    3035 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0719 11:37:53.103074    3035 status.go:255] checking status of ha-604000-m04 ...
	I0719 11:37:53.103770    3035 status.go:330] ha-604000-m04 host status = "Running" (err=<nil>)
	I0719 11:37:53.103780    3035 host.go:66] Checking if "ha-604000-m04" exists ...
	I0719 11:37:53.103896    3035 host.go:66] Checking if "ha-604000-m04" exists ...
	I0719 11:37:53.104041    3035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 11:37:53.104047    3035 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000-m04/id_rsa Username:docker}
	W0719 11:39:08.105138    3035 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0719 11:39:08.105199    3035 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0719 11:39:08.105207    3035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0719 11:39:08.105211    3035 status.go:257] ha-604000-m04 status: &{Name:ha-604000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0719 11:39:08.105220    3035 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 3 (25.95440325s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0719 11:39:34.059646    3067 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0719 11:39:34.059653    3067 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.02s)
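Every failure in this group reduces to the same root cause visible in the logs above: nothing is accepting connections on the unix socket /var/run/socket_vmnet, so libmachine can never launch the QEMU VMs. A minimal sketch of how the daemon could be checked on the affected host (assuming socket_vmnet was installed under /opt/socket_vmnet, as the logged paths indicate):

	# Is the socket present, and is any process serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Probe the socket directly, with a trivial child command in place of
	# qemu-system-aarch64; a healthy daemon execs the child with the vmnet
	# file descriptor attached.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If the probe also prints `Failed to connect to "/var/run/socket_vmnet": Connection refused`, the daemon itself (not minikube or QEMU) needs to be restarted before any of these tests can pass.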

TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.49s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-604000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-604000 -v=7 --alsologtostderr
E0719 11:41:49.529492    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:43:27.281936    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:44:50.346436    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-604000 -v=7 --alsologtostderr: (4m38.090533458s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-604000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-604000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.227047625s)

-- stdout --
	* [ha-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-604000" primary control-plane node in "ha-604000" cluster
	* Restarting existing qemu2 VM for "ha-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 11:45:30.374119    3179 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:45:30.374331    3179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:45:30.374336    3179 out.go:304] Setting ErrFile to fd 2...
	I0719 11:45:30.374339    3179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:45:30.374527    3179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:45:30.375776    3179 out.go:298] Setting JSON to false
	I0719 11:45:30.396358    3179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2693,"bootTime":1721412037,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:45:30.396425    3179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:45:30.401990    3179 out.go:177] * [ha-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:45:30.409980    3179 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:45:30.410052    3179 notify.go:220] Checking for updates...
	I0719 11:45:30.417938    3179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:45:30.421971    3179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:45:30.424988    3179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:45:30.428913    3179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:45:30.431930    3179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:45:30.435268    3179 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:45:30.435328    3179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:45:30.439921    3179 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:45:30.446997    3179 start.go:297] selected driver: qemu2
	I0719 11:45:30.447007    3179 start.go:901] validating driver "qemu2" against &{Name:ha-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-604000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:45:30.447092    3179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:45:30.450097    3179 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:45:30.450125    3179 cni.go:84] Creating CNI manager for ""
	I0719 11:45:30.450130    3179 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 11:45:30.450182    3179 start.go:340] cluster config:
	{Name:ha-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-604000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:45:30.454957    3179 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:45:30.462930    3179 out.go:177] * Starting "ha-604000" primary control-plane node in "ha-604000" cluster
	I0719 11:45:30.465954    3179 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:45:30.465969    3179 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:45:30.465981    3179 cache.go:56] Caching tarball of preloaded images
	I0719 11:45:30.466063    3179 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:45:30.466069    3179 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:45:30.466138    3179 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/ha-604000/config.json ...
	I0719 11:45:30.466556    3179 start.go:360] acquireMachinesLock for ha-604000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:45:30.466591    3179 start.go:364] duration metric: took 28.708µs to acquireMachinesLock for "ha-604000"
	I0719 11:45:30.466599    3179 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:45:30.466604    3179 fix.go:54] fixHost starting: 
	I0719 11:45:30.466722    3179 fix.go:112] recreateIfNeeded on ha-604000: state=Stopped err=<nil>
	W0719 11:45:30.466730    3179 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:45:30.470966    3179 out.go:177] * Restarting existing qemu2 VM for "ha-604000" ...
	I0719 11:45:30.478902    3179 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:45:30.478936    3179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1d:3e:72:df:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/disk.qcow2
	I0719 11:45:30.481170    3179 main.go:141] libmachine: STDOUT: 
	I0719 11:45:30.481192    3179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:45:30.481221    3179 fix.go:56] duration metric: took 14.616667ms for fixHost
	I0719 11:45:30.481226    3179 start.go:83] releasing machines lock for "ha-604000", held for 14.631208ms
	W0719 11:45:30.481233    3179 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:45:30.481266    3179 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:45:30.481271    3179 start.go:729] Will try again in 5 seconds ...
	I0719 11:45:35.483432    3179 start.go:360] acquireMachinesLock for ha-604000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:45:35.483791    3179 start.go:364] duration metric: took 280.166µs to acquireMachinesLock for "ha-604000"
	I0719 11:45:35.483919    3179 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:45:35.483938    3179 fix.go:54] fixHost starting: 
	I0719 11:45:35.484618    3179 fix.go:112] recreateIfNeeded on ha-604000: state=Stopped err=<nil>
	W0719 11:45:35.484644    3179 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:45:35.489166    3179 out.go:177] * Restarting existing qemu2 VM for "ha-604000" ...
	I0719 11:45:35.497059    3179 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:45:35.497235    3179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1d:3e:72:df:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/disk.qcow2
	I0719 11:45:35.506685    3179 main.go:141] libmachine: STDOUT: 
	I0719 11:45:35.506780    3179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:45:35.506873    3179 fix.go:56] duration metric: took 22.934333ms for fixHost
	I0719 11:45:35.506893    3179 start.go:83] releasing machines lock for "ha-604000", held for 23.077667ms
	W0719 11:45:35.507125    3179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:45:35.514062    3179 out.go:177] 
	W0719 11:45:35.518122    3179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:45:35.518155    3179 out.go:239] * 
	* 
	W0719 11:45:35.520620    3179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:45:35.532043    3179 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-604000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-604000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (33.516375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.49s)
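The start path above retries once after 5 seconds and then surfaces the GUEST_PROVISION error with the hint to run "minikube delete -p ha-604000". A sketch of that recovery sequence, using the binary path and flags that appear elsewhere in this report (note that recreating the profile cannot help while the socket itself refuses connections):

	out/minikube-darwin-arm64 delete -p ha-604000
	out/minikube-darwin-arm64 start -p ha-604000 --driver=qemu2 --network=socket_vmnet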

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-604000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.982666ms)

-- stdout --
	* The control-plane node ha-604000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-604000"

-- /stdout --
** stderr ** 
	I0719 11:45:35.671620    3193 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:45:35.671855    3193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:45:35.671858    3193 out.go:304] Setting ErrFile to fd 2...
	I0719 11:45:35.671861    3193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:45:35.671990    3193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:45:35.672231    3193 mustload.go:65] Loading cluster: ha-604000
	I0719 11:45:35.672449    3193 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0719 11:45:35.672778    3193 out.go:239] ! The control-plane node ha-604000 host is not running (will try others): state=Stopped
	! The control-plane node ha-604000 host is not running (will try others): state=Stopped
	W0719 11:45:35.672893    3193 out.go:239] ! The control-plane node ha-604000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-604000-m02 host is not running (will try others): state=Stopped
	I0719 11:45:35.677716    3193 out.go:177] * The control-plane node ha-604000-m03 host is not running: state=Stopped
	I0719 11:45:35.680689    3193 out.go:177]   To start a cluster, run: "minikube start -p ha-604000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-604000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr: exit status 7 (29.663208ms)

-- stdout --
	ha-604000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 11:45:35.710505    3195 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:45:35.710633    3195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:45:35.710637    3195 out.go:304] Setting ErrFile to fd 2...
	I0719 11:45:35.710639    3195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:45:35.710766    3195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:45:35.710895    3195 out.go:298] Setting JSON to false
	I0719 11:45:35.710905    3195 mustload.go:65] Loading cluster: ha-604000
	I0719 11:45:35.710958    3195 notify.go:220] Checking for updates...
	I0719 11:45:35.711116    3195 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:45:35.711122    3195 status.go:255] checking status of ha-604000 ...
	I0719 11:45:35.711339    3195 status.go:330] ha-604000 host status = "Stopped" (err=<nil>)
	I0719 11:45:35.711342    3195 status.go:343] host is not running, skipping remaining checks
	I0719 11:45:35.711344    3195 status.go:257] ha-604000 status: &{Name:ha-604000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:45:35.711353    3195 status.go:255] checking status of ha-604000-m02 ...
	I0719 11:45:35.711455    3195 status.go:330] ha-604000-m02 host status = "Stopped" (err=<nil>)
	I0719 11:45:35.711458    3195 status.go:343] host is not running, skipping remaining checks
	I0719 11:45:35.711460    3195 status.go:257] ha-604000-m02 status: &{Name:ha-604000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:45:35.711464    3195 status.go:255] checking status of ha-604000-m03 ...
	I0719 11:45:35.711553    3195 status.go:330] ha-604000-m03 host status = "Stopped" (err=<nil>)
	I0719 11:45:35.711555    3195 status.go:343] host is not running, skipping remaining checks
	I0719 11:45:35.711560    3195 status.go:257] ha-604000-m03 status: &{Name:ha-604000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:45:35.711564    3195 status.go:255] checking status of ha-604000-m04 ...
	I0719 11:45:35.711655    3195 status.go:330] ha-604000-m04 host status = "Stopped" (err=<nil>)
	I0719 11:45:35.711657    3195 status.go:343] host is not running, skipping remaining checks
	I0719 11:45:35.711659    3195 status.go:257] ha-604000-m04 status: &{Name:ha-604000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (29.138375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.02s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-604000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-604000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-604000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-604000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (47.199917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.02s)
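Note: the assertion at ha_test.go:413 parses the "profile list --output json" payload quoted above and compares each profile's Status field against "Degraded"; it sees "Stopped" because none of the ha-604000 machines are running. A minimal sketch of that check in Go, assuming only the binary path shown in the log and the two JSON fields the assertion actually reads:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList models just the slice of the `profile list --output json`
    // payload that the assertion inspects.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, p := range pl.Valid {
            // For this run: "ha-604000: Stopped" rather than "Degraded".
            fmt.Printf("%s: %s\n", p.Name, p.Status)
        }
    }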

TestMultiControlPlane/serial/StopCluster (251.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 stop -v=7 --alsologtostderr
E0719 11:46:49.525484    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:48:27.278137    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-604000 stop -v=7 --alsologtostderr: (4m11.062758542s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr: exit status 7 (61.3365ms)

-- stdout --
	ha-604000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-604000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 11:49:47.875991    3260 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:49:47.876192    3260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:49:47.876197    3260 out.go:304] Setting ErrFile to fd 2...
	I0719 11:49:47.876200    3260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:49:47.876375    3260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:49:47.876549    3260 out.go:298] Setting JSON to false
	I0719 11:49:47.876562    3260 mustload.go:65] Loading cluster: ha-604000
	I0719 11:49:47.876604    3260 notify.go:220] Checking for updates...
	I0719 11:49:47.876848    3260 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:49:47.876856    3260 status.go:255] checking status of ha-604000 ...
	I0719 11:49:47.877142    3260 status.go:330] ha-604000 host status = "Stopped" (err=<nil>)
	I0719 11:49:47.877146    3260 status.go:343] host is not running, skipping remaining checks
	I0719 11:49:47.877149    3260 status.go:257] ha-604000 status: &{Name:ha-604000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:49:47.877163    3260 status.go:255] checking status of ha-604000-m02 ...
	I0719 11:49:47.877295    3260 status.go:330] ha-604000-m02 host status = "Stopped" (err=<nil>)
	I0719 11:49:47.877299    3260 status.go:343] host is not running, skipping remaining checks
	I0719 11:49:47.877302    3260 status.go:257] ha-604000-m02 status: &{Name:ha-604000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:49:47.877308    3260 status.go:255] checking status of ha-604000-m03 ...
	I0719 11:49:47.877432    3260 status.go:330] ha-604000-m03 host status = "Stopped" (err=<nil>)
	I0719 11:49:47.877436    3260 status.go:343] host is not running, skipping remaining checks
	I0719 11:49:47.877439    3260 status.go:257] ha-604000-m03 status: &{Name:ha-604000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 11:49:47.877444    3260 status.go:255] checking status of ha-604000-m04 ...
	I0719 11:49:47.877571    3260 status.go:330] ha-604000-m04 host status = "Stopped" (err=<nil>)
	I0719 11:49:47.877575    3260 status.go:343] host is not running, skipping remaining checks
	I0719 11:49:47.877577    3260 status.go:257] ha-604000-m04 status: &{Name:ha-604000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr": ha-604000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr": ha-604000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr": ha-604000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-604000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (31.946625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (251.16s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-604000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-604000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.181765375s)

-- stdout --
	* [ha-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-604000" primary control-plane node in "ha-604000" cluster
	* Restarting existing qemu2 VM for "ha-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-604000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 11:49:47.938146    3264 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:49:47.938271    3264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:49:47.938274    3264 out.go:304] Setting ErrFile to fd 2...
	I0719 11:49:47.938277    3264 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:49:47.938422    3264 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:49:47.939412    3264 out.go:298] Setting JSON to false
	I0719 11:49:47.955490    3264 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2950,"bootTime":1721412037,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:49:47.955564    3264 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:49:47.960661    3264 out.go:177] * [ha-604000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:49:47.967575    3264 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:49:47.967631    3264 notify.go:220] Checking for updates...
	I0719 11:49:47.971447    3264 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:49:47.974565    3264 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:49:47.977547    3264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:49:47.980470    3264 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:49:47.983508    3264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:49:47.986809    3264 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:49:47.987072    3264 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:49:47.991477    3264 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:49:47.998535    3264 start.go:297] selected driver: qemu2
	I0719 11:49:47.998543    3264 start.go:901] validating driver "qemu2" against &{Name:ha-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-604000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:49:47.998653    3264 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:49:48.000830    3264 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:49:48.000873    3264 cni.go:84] Creating CNI manager for ""
	I0719 11:49:48.000880    3264 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 11:49:48.000922    3264 start.go:340] cluster config:
	{Name:ha-604000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-604000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:49:48.004316    3264 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:49:48.012484    3264 out.go:177] * Starting "ha-604000" primary control-plane node in "ha-604000" cluster
	I0719 11:49:48.016494    3264 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:49:48.016511    3264 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:49:48.016523    3264 cache.go:56] Caching tarball of preloaded images
	I0719 11:49:48.016589    3264 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:49:48.016595    3264 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:49:48.016664    3264 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/ha-604000/config.json ...
	I0719 11:49:48.017053    3264 start.go:360] acquireMachinesLock for ha-604000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:49:48.017088    3264 start.go:364] duration metric: took 28.625µs to acquireMachinesLock for "ha-604000"
	I0719 11:49:48.017096    3264 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:49:48.017101    3264 fix.go:54] fixHost starting: 
	I0719 11:49:48.017213    3264 fix.go:112] recreateIfNeeded on ha-604000: state=Stopped err=<nil>
	W0719 11:49:48.017220    3264 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:49:48.021521    3264 out.go:177] * Restarting existing qemu2 VM for "ha-604000" ...
	I0719 11:49:48.029447    3264 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:49:48.029487    3264 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1d:3e:72:df:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/disk.qcow2
	I0719 11:49:48.031422    3264 main.go:141] libmachine: STDOUT: 
	I0719 11:49:48.031440    3264 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:49:48.031468    3264 fix.go:56] duration metric: took 14.3665ms for fixHost
	I0719 11:49:48.031472    3264 start.go:83] releasing machines lock for "ha-604000", held for 14.380417ms
	W0719 11:49:48.031479    3264 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:49:48.031516    3264 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:49:48.031521    3264 start.go:729] Will try again in 5 seconds ...
	I0719 11:49:53.033669    3264 start.go:360] acquireMachinesLock for ha-604000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:49:53.034228    3264 start.go:364] duration metric: took 448.792µs to acquireMachinesLock for "ha-604000"
	I0719 11:49:53.034348    3264 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:49:53.034365    3264 fix.go:54] fixHost starting: 
	I0719 11:49:53.035285    3264 fix.go:112] recreateIfNeeded on ha-604000: state=Stopped err=<nil>
	W0719 11:49:53.035319    3264 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:49:53.043775    3264 out.go:177] * Restarting existing qemu2 VM for "ha-604000" ...
	I0719 11:49:53.048757    3264 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:49:53.049006    3264 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1d:3e:72:df:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/ha-604000/disk.qcow2
	I0719 11:49:53.059268    3264 main.go:141] libmachine: STDOUT: 
	I0719 11:49:53.059397    3264 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:49:53.059513    3264 fix.go:56] duration metric: took 25.14175ms for fixHost
	I0719 11:49:53.059531    3264 start.go:83] releasing machines lock for "ha-604000", held for 25.280917ms
	W0719 11:49:53.059773    3264 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-604000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:49:53.066773    3264 out.go:177] 
	W0719 11:49:53.070773    3264 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:49:53.070807    3264 out.go:239] * 
	* 
	W0719 11:49:53.073458    3264 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:49:53.084778    3264 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-604000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (69.984625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
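Note: both restart attempts above fail at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and minikube exits with GUEST_PROVISION. A quick probe of that socket, as a sketch assuming the socket path taken from the log and run on the same host:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // With the daemon down this reports "connection refused",
            // matching the driver errors above.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }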

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-604000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-604000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-604000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-604000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (29.047125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-604000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-604000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.368417ms)

-- stdout --
	* The control-plane node ha-604000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-604000"

-- /stdout --
** stderr ** 
	I0719 11:49:53.270410    3279 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:49:53.270800    3279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:49:53.270804    3279 out.go:304] Setting ErrFile to fd 2...
	I0719 11:49:53.270807    3279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:49:53.270987    3279 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:49:53.271226    3279 mustload.go:65] Loading cluster: ha-604000
	I0719 11:49:53.271469    3279 config.go:182] Loaded profile config "ha-604000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0719 11:49:53.271779    3279 out.go:239] ! The control-plane node ha-604000 host is not running (will try others): state=Stopped
	! The control-plane node ha-604000 host is not running (will try others): state=Stopped
	W0719 11:49:53.271876    3279 out.go:239] ! The control-plane node ha-604000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-604000-m02 host is not running (will try others): state=Stopped
	I0719 11:49:53.276249    3279 out.go:177] * The control-plane node ha-604000-m03 host is not running: state=Stopped
	I0719 11:49:53.280314    3279 out.go:177]   To start a cluster, run: "minikube start -p ha-604000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-604000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-604000 -n ha-604000: exit status 7 (28.993875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-604000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.93s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-010000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-010000 --driver=qemu2 : exit status 80 (9.861564625s)

-- stdout --
	* [image-010000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-010000" primary control-plane node in "image-010000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-010000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-010000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-010000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-010000 -n image-010000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-010000 -n image-010000: exit status 7 (66.4665ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-010000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.93s)

TestJSONOutput/start/Command (9.88s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-773000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-773000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.876734709s)

-- stdout --
	{"specversion":"1.0","id":"b5d2e237-5dad-4cdd-b162-1cdbaece9680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-773000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8226a306-c934-489b-ac8c-6289db685655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19307"}}
	{"specversion":"1.0","id":"b333c7ec-e141-40ca-829d-3564b58b8ee9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig"}}
	{"specversion":"1.0","id":"6b649387-7f0d-4934-b7e8-f9d9f8f32f30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b588c9fb-9d8d-4853-a5cd-463e0b790ca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5d9a26f2-a71e-4acb-ab92-1e055ef8582b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube"}}
	{"specversion":"1.0","id":"8c0277fe-7bc7-4bab-8e1d-4c0361a85eb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d7d62beb-e25f-4881-82ca-ea660a3cedf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"845a01a0-66ba-4df7-9364-030b8c9f00f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"399c33e4-1586-44cc-bedb-ed1cdbf37a8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-773000\" primary control-plane node in \"json-output-773000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ddf1743-0448-4c0a-8e30-8d01c762ba01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a54c900b-908f-416c-a9c9-77cbc482e2fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-773000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee544d92-9cec-4120-9428-e10256b07dda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"77de90bc-8199-4cd9-aca5-332957bf9df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b4b46253-7de3-42e8-a309-4e0ffba1eb92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-773000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1c537bf2-a26f-4e86-bdfe-78c74e479820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"42abe627-921b-475e-89c8-559ecd1ab0f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-773000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.88s)
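Note: the converter at json_output_test.go:213 appears to decode stdout line by line as CloudEvents, and the qemu2 driver's raw "OUTPUT:" / "ERROR:" lines are interleaved with the JSON events, so decoding stops at the first non-JSON line. The reported error is exactly what Go's encoding/json returns for such input; a minimal reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // A raw driver line as it appears between the CloudEvents above.
        var event map[string]interface{}
        err := json.Unmarshal([]byte("OUTPUT: "), &event)
        // Prints: invalid character 'O' looking for beginning of value
        fmt.Println(err)
    }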

TestJSONOutput/pause/Command (0.07s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-773000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-773000 --output=json --user=testUser: exit status 83 (72.18575ms)

-- stdout --
	{"specversion":"1.0","id":"e29de4c7-f244-405e-8fc1-2449b2a66d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-773000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"95d6e745-1cd5-4474-873f-d62ae9e14fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-773000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-773000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.07s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-773000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-773000 --output=json --user=testUser: exit status 83 (43.050875ms)

-- stdout --
	* The control-plane node json-output-773000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-773000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-773000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-773000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-538000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-538000 --driver=qemu2 : exit status 80 (9.854050709s)

-- stdout --
	* [first-538000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-538000" primary control-plane node in "first-538000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-538000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-538000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-19 11:50:27.329044 -0700 PDT m=+2252.657875876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-539000 -n second-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-539000 -n second-539000: exit status 85 (79.48925ms)

                                                
                                                
-- stdout --
	* Profile "second-539000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-539000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-539000" host is not running, skipping log retrieval (state="* Profile \"second-539000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-539000\"")
helpers_test.go:175: Cleaning up "second-539000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-539000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-19 11:50:27.510587 -0700 PDT m=+2252.839420709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-538000 -n first-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-538000 -n first-538000: exit status 7 (29.787917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-538000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-538000
--- FAIL: TestMinikubeProfile (10.14s)
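
Note: every GUEST_PROVISION failure in this report traces back to the same precondition: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor. "Connection refused" on a unix socket means the socket file exists but no daemon is accepting on it. A minimal Go probe of that socket, assuming the default path shown in the logs (a diagnostic sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Dials the socket_vmnet control socket the way a client would and
	// reports whether anything is accepting connections. The path matches
	// the one in the failure logs above; adjust it if socket_vmnet was
	// started with a different socket.
	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the report's error
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}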

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-691000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-691000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.987343208s)

                                                
                                                
-- stdout --
	* [mount-start-1-691000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-691000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-691000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-691000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-691000 -n mount-start-1-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-691000 -n mount-start-1-691000: exit status 7 (66.635041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-691000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.05s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-281000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-281000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.752456125s)

                                                
                                                
-- stdout --
	* [multinode-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-281000" primary control-plane node in "multinode-281000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-281000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:50:37.878012    3422 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:50:37.878134    3422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:50:37.878140    3422 out.go:304] Setting ErrFile to fd 2...
	I0719 11:50:37.878143    3422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:50:37.878270    3422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:50:37.879287    3422 out.go:298] Setting JSON to false
	I0719 11:50:37.895688    3422 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3000,"bootTime":1721412037,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:50:37.895754    3422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:50:37.901385    3422 out.go:177] * [multinode-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:50:37.908392    3422 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:50:37.908440    3422 notify.go:220] Checking for updates...
	I0719 11:50:37.915312    3422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:50:37.918360    3422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:50:37.921367    3422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:50:37.924324    3422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:50:37.927360    3422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:50:37.930491    3422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:50:37.935297    3422 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:50:37.941364    3422 start.go:297] selected driver: qemu2
	I0719 11:50:37.941371    3422 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:50:37.941385    3422 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:50:37.943645    3422 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:50:37.946366    3422 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:50:37.949422    3422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:50:37.949455    3422 cni.go:84] Creating CNI manager for ""
	I0719 11:50:37.949461    3422 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 11:50:37.949466    3422 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 11:50:37.949498    3422 start.go:340] cluster config:
	{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:50:37.953455    3422 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:50:37.961327    3422 out.go:177] * Starting "multinode-281000" primary control-plane node in "multinode-281000" cluster
	I0719 11:50:37.965365    3422 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:50:37.965381    3422 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:50:37.965393    3422 cache.go:56] Caching tarball of preloaded images
	I0719 11:50:37.965467    3422 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:50:37.965473    3422 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:50:37.965716    3422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/multinode-281000/config.json ...
	I0719 11:50:37.965729    3422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/multinode-281000/config.json: {Name:mkcaf3d64fd3b8ea145fb04681016121cf509d75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:50:37.965948    3422 start.go:360] acquireMachinesLock for multinode-281000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:50:37.965983    3422 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "multinode-281000"
	I0719 11:50:37.965994    3422 start.go:93] Provisioning new machine with config: &{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:50:37.966064    3422 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:50:37.974374    3422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 11:50:37.992805    3422 start.go:159] libmachine.API.Create for "multinode-281000" (driver="qemu2")
	I0719 11:50:37.992834    3422 client.go:168] LocalClient.Create starting
	I0719 11:50:37.992894    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:50:37.992925    3422 main.go:141] libmachine: Decoding PEM data...
	I0719 11:50:37.992934    3422 main.go:141] libmachine: Parsing certificate...
	I0719 11:50:37.992975    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:50:37.992999    3422 main.go:141] libmachine: Decoding PEM data...
	I0719 11:50:37.993008    3422 main.go:141] libmachine: Parsing certificate...
	I0719 11:50:37.993364    3422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:50:38.132753    3422 main.go:141] libmachine: Creating SSH key...
	I0719 11:50:38.227972    3422 main.go:141] libmachine: Creating Disk image...
	I0719 11:50:38.227982    3422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:50:38.228157    3422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:50:38.237268    3422 main.go:141] libmachine: STDOUT: 
	I0719 11:50:38.237289    3422 main.go:141] libmachine: STDERR: 
	I0719 11:50:38.237342    3422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2 +20000M
	I0719 11:50:38.245320    3422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:50:38.245334    3422 main.go:141] libmachine: STDERR: 
	I0719 11:50:38.245347    3422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:50:38.245352    3422 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:50:38.245362    3422 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:50:38.245384    3422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ba:b0:3e:be:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:50:38.246992    3422 main.go:141] libmachine: STDOUT: 
	I0719 11:50:38.247003    3422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:50:38.247020    3422 client.go:171] duration metric: took 254.184375ms to LocalClient.Create
	I0719 11:50:40.249209    3422 start.go:128] duration metric: took 2.283153625s to createHost
	I0719 11:50:40.249312    3422 start.go:83] releasing machines lock for "multinode-281000", held for 2.283349167s
	W0719 11:50:40.249359    3422 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:50:40.265659    3422 out.go:177] * Deleting "multinode-281000" in qemu2 ...
	W0719 11:50:40.291284    3422 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:50:40.291331    3422 start.go:729] Will try again in 5 seconds ...
	I0719 11:50:45.293694    3422 start.go:360] acquireMachinesLock for multinode-281000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:50:45.294166    3422 start.go:364] duration metric: took 357.625µs to acquireMachinesLock for "multinode-281000"
	I0719 11:50:45.294294    3422 start.go:93] Provisioning new machine with config: &{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:50:45.294623    3422 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:50:45.304300    3422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 11:50:45.353617    3422 start.go:159] libmachine.API.Create for "multinode-281000" (driver="qemu2")
	I0719 11:50:45.353665    3422 client.go:168] LocalClient.Create starting
	I0719 11:50:45.353772    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:50:45.353836    3422 main.go:141] libmachine: Decoding PEM data...
	I0719 11:50:45.353853    3422 main.go:141] libmachine: Parsing certificate...
	I0719 11:50:45.353921    3422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:50:45.353965    3422 main.go:141] libmachine: Decoding PEM data...
	I0719 11:50:45.353981    3422 main.go:141] libmachine: Parsing certificate...
	I0719 11:50:45.354509    3422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:50:45.503251    3422 main.go:141] libmachine: Creating SSH key...
	I0719 11:50:45.540722    3422 main.go:141] libmachine: Creating Disk image...
	I0719 11:50:45.540726    3422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:50:45.540905    3422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:50:45.549960    3422 main.go:141] libmachine: STDOUT: 
	I0719 11:50:45.549990    3422 main.go:141] libmachine: STDERR: 
	I0719 11:50:45.550038    3422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2 +20000M
	I0719 11:50:45.557770    3422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:50:45.557783    3422 main.go:141] libmachine: STDERR: 
	I0719 11:50:45.557793    3422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:50:45.557797    3422 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:50:45.557805    3422 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:50:45.557827    3422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:af:24:95:29:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:50:45.559456    3422 main.go:141] libmachine: STDOUT: 
	I0719 11:50:45.559467    3422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:50:45.559479    3422 client.go:171] duration metric: took 205.811292ms to LocalClient.Create
	I0719 11:50:47.561630    3422 start.go:128] duration metric: took 2.267008375s to createHost
	I0719 11:50:47.561839    3422 start.go:83] releasing machines lock for "multinode-281000", held for 2.267544083s
	W0719 11:50:47.562167    3422 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-281000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:50:47.571702    3422 out.go:177] 
	W0719 11:50:47.577871    3422 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:50:47.577896    3422 out.go:239] * 
	* 
	W0719 11:50:47.580800    3422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:50:47.587758    3422 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-281000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (68.653083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.82s)
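
Note: the trace above shows the start path in full: createHost calls libmachine.API.Create, the socket_vmnet_client launch fails, the profile is deleted, and the whole sequence is retried once after 5 seconds before GUEST_PROVISION is declared fatal. A compressed sketch of that control flow (function names here are illustrative, not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine host creation seen in the
	// log; here it always fails the way this run did, so only the retry
	// path is exercised.
	func createHost(profile string) error {
		return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "multinode-281000"
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// matches "Will try again in 5 seconds ..." in the log above
			time.Sleep(5 * time.Second)
			if err := createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}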

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (93.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.889916ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-281000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- rollout status deployment/busybox: exit status 1 (57.868291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.760333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.928459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.77675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.986542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.868917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.053375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.541625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.143042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.773167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0719 11:51:49.521179    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.724791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.051542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.434584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.4515ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.640667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.773166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (28.694459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.69s)
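
Note: most of this test's 93.69s is the poll loop visible above: the Pod-IP query is retried on an interval until a deadline, then the test gives up with "failed to resolve pod IPs". A simplified version of that loop, shelling out to the same binary (the 5s cadence and 90s deadline are illustrative approximations of what the log shows):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs runs the same query the test does; with no cluster behind
	// the profile it keeps failing with `no server found for cluster`.
	func podIPs(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(90 * time.Second)
		for time.Now().Before(deadline) {
			if ips, err := podIPs("multinode-281000"); err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			// retry cadence, as in the repeated attempts captured above
			time.Sleep(5 * time.Second)
		}
		fmt.Println("failed to resolve pod IPs: deadline exceeded")
	}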

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-281000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.832875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (29.46425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-281000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-281000 -v 3 --alsologtostderr: exit status 83 (42.720709ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-281000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-281000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:21.474478    3511 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:21.474636    3511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.474639    3511 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:21.474641    3511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.474766    3511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:21.475020    3511 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:21.475208    3511 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:21.480114    3511 out.go:177] * The control-plane node multinode-281000 host is not running: state=Stopped
	I0719 11:52:21.484263    3511 out.go:177]   To start a cluster, run: "minikube start -p multinode-281000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-281000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (28.736166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-281000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-281000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.387959ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-281000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-281000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-281000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (29.07775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-281000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-281000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-281000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-281000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (29.339792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
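Context for the failure above: the assertion at multinode_test.go:166 parses `minikube profile list --output json` and expects three entries under Config.Nodes for the multinode profile, but the dump shows only the single primary node (the workers were never created because the VM never started). Below is a minimal sketch of reproducing that count; the struct types are hypothetical reductions of minikube's real config structs, derived only from the JSON shown in the failure message.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Reduced, hypothetical shapes; the real structs live in minikube's config package.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants 3 here for "multinode-281000"; the dump above has 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}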

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status --output json --alsologtostderr: exit status 7 (29.644708ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-281000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:21.678759    3523 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:21.678903    3523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.678906    3523 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:21.678908    3523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.679031    3523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:21.679141    3523 out.go:298] Setting JSON to true
	I0719 11:52:21.679151    3523 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:21.679193    3523 notify.go:220] Checking for updates...
	I0719 11:52:21.679342    3523 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:21.679349    3523 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:21.679555    3523 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:21.679559    3523 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:21.679561    3523 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-281000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (29.43175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
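Context: the decode error at multinode_test.go:191 ("cannot unmarshal object into Go value of type []cmd.Status") follows directly from the stdout above. With only one node, `minikube status --output json` emits a single JSON object, while the test unmarshals into a slice. A minimal sketch of the mismatch, using a reduced stand-in for cmd.Status (the real struct has more fields):

package main

import (
	"encoding/json"
	"fmt"
)

// Reduced stand-in for cmd.Status.
type status struct {
	Name string
	Host string
}

func main() {
	// Single-node output, as in the stdout above: one object, not an array.
	single := []byte(`{"Name":"multinode-281000","Host":"Stopped"}`)

	var many []status
	if err := json.Unmarshal(single, &many); err != nil {
		// Prints the same class of error as the test:
		// json: cannot unmarshal object into Go value of type []main.status
		fmt.Println(err)
	}

	// A decoder tolerant of both shapes falls back to the single-object form.
	var one status
	if err := json.Unmarshal(single, &one); err == nil {
		many = append(many, one)
	}
	fmt.Println("decoded", len(many), "status entry/entries")
}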

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 node stop m03: exit status 85 (44.514ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-281000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status: exit status 7 (29.760042ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr: exit status 7 (28.396625ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:21.811661    3531 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:21.811813    3531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.811817    3531 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:21.811819    3531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.811939    3531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:21.812047    3531 out.go:298] Setting JSON to false
	I0719 11:52:21.812060    3531 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:21.812105    3531 notify.go:220] Checking for updates...
	I0719 11:52:21.812277    3531 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:21.812285    3531 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:21.812481    3531 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:21.812485    3531 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:21.812487    3531 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr": multinode-281000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (29.329833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
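Context: `node stop m03` exits 85 with GUEST_NODE_RETRIEVE because the m03 worker was never created; the profile dump earlier in this run lists a single node. A minimal sketch of listing a profile's nodes before operating on one, assuming the standard config.json layout under the minikube home directory (the path shape appears later in this log); adjust the base directory for a real run.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumed default location; CI runs override this via MINIKUBE_HOME.
	home, _ := os.UserHomeDir()
	cfgPath := filepath.Join(home, ".minikube", "profiles", "multinode-281000", "config.json")

	raw, err := os.ReadFile(cfgPath)
	if err != nil {
		fmt.Println("no such profile:", err)
		return
	}
	var cfg struct {
		Nodes []struct{ Name string }
	}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Println("unreadable config:", err)
		return
	}
	for i, n := range cfg.Nodes {
		name := n.Name
		if name == "" {
			name = "(primary)" // the primary node's Name is empty in the dump above
		}
		fmt.Printf("node %d: %s\n", i+1, name)
	}
}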

                                                
                                    
TestMultiNode/serial/StartAfterStop (59.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.157042ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:21.870795    3535 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:21.871081    3535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.871084    3535 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:21.871086    3535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.871251    3535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:21.871457    3535 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:21.871645    3535 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:21.875361    3535 out.go:177] 
	W0719 11:52:21.878221    3535 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0719 11:52:21.878227    3535 out.go:239] * 
	* 
	W0719 11:52:21.879846    3535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:52:21.883181    3535 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0719 11:52:21.870795    3535 out.go:291] Setting OutFile to fd 1 ...
I0719 11:52:21.871081    3535 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:52:21.871084    3535 out.go:304] Setting ErrFile to fd 2...
I0719 11:52:21.871086    3535 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:52:21.871251    3535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:52:21.871457    3535 mustload.go:65] Loading cluster: multinode-281000
I0719 11:52:21.871645    3535 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:52:21.875361    3535 out.go:177] 
W0719 11:52:21.878221    3535 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0719 11:52:21.878227    3535 out.go:239] * 
* 
W0719 11:52:21.879846    3535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 11:52:21.883181    3535 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-281000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (29.175542ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:21.914750    3537 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:21.914920    3537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.914923    3537 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:21.914926    3537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:21.915073    3537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:21.915190    3537 out.go:298] Setting JSON to false
	I0719 11:52:21.915201    3537 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:21.915253    3537 notify.go:220] Checking for updates...
	I0719 11:52:21.915390    3537 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:21.915396    3537 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:21.915597    3537 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:21.915601    3537 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:21.915603    3537 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (72.871708ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:22.736022    3539 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:22.736258    3539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:22.736263    3539 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:22.736267    3539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:22.736474    3539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:22.736666    3539 out.go:298] Setting JSON to false
	I0719 11:52:22.736681    3539 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:22.736732    3539 notify.go:220] Checking for updates...
	I0719 11:52:22.736988    3539 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:22.736997    3539 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:22.737309    3539 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:22.737314    3539 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:22.737317    3539 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (71.84675ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:24.165440    3541 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:24.165667    3541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:24.165671    3541 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:24.165674    3541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:24.165854    3541 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:24.166017    3541 out.go:298] Setting JSON to false
	I0719 11:52:24.166031    3541 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:24.166065    3541 notify.go:220] Checking for updates...
	I0719 11:52:24.166288    3541 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:24.166296    3541 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:24.166578    3541 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:24.166583    3541 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:24.166586    3541 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (72.666416ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:26.299600    3548 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:26.299829    3548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:26.299834    3548 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:26.299838    3548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:26.300029    3548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:26.300217    3548 out.go:298] Setting JSON to false
	I0719 11:52:26.300239    3548 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:26.300285    3548 notify.go:220] Checking for updates...
	I0719 11:52:26.300543    3548 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:26.300552    3548 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:26.300877    3548 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:26.300882    3548 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:26.300885    3548 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (70.191541ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:30.662311    3553 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:30.662562    3553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:30.662567    3553 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:30.662569    3553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:30.662743    3553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:30.662926    3553 out.go:298] Setting JSON to false
	I0719 11:52:30.662942    3553 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:30.662978    3553 notify.go:220] Checking for updates...
	I0719 11:52:30.663195    3553 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:30.663203    3553 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:30.663480    3553 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:30.663485    3553 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:30.663488    3553 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (70.355708ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:33.296471    3555 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:33.296685    3555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:33.296689    3555 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:33.296693    3555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:33.296853    3555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:33.297018    3555 out.go:298] Setting JSON to false
	I0719 11:52:33.297035    3555 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:33.297071    3555 notify.go:220] Checking for updates...
	I0719 11:52:33.297306    3555 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:33.297314    3555 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:33.297634    3555 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:33.297639    3555 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:33.297642    3555 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (72.296084ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:39.967384    3557 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:39.967586    3557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:39.967591    3557 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:39.967594    3557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:39.967767    3557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:39.967915    3557 out.go:298] Setting JSON to false
	I0719 11:52:39.967928    3557 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:39.967975    3557 notify.go:220] Checking for updates...
	I0719 11:52:39.968171    3557 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:39.968178    3557 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:39.968487    3557 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:39.968492    3557 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:39.968495    3557 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (70.303083ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:52:51.809728    3564 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:52:51.810041    3564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:51.810046    3564 out.go:304] Setting ErrFile to fd 2...
	I0719 11:52:51.810049    3564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:52:51.810226    3564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:52:51.810407    3564 out.go:298] Setting JSON to false
	I0719 11:52:51.810422    3564 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:52:51.810467    3564 notify.go:220] Checking for updates...
	I0719 11:52:51.810709    3564 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:52:51.810718    3564 status.go:255] checking status of multinode-281000 ...
	I0719 11:52:51.811023    3564 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:52:51.811028    3564 status.go:343] host is not running, skipping remaining checks
	I0719 11:52:51.811032    3564 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (75.086875ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:53:03.564639    3568 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:03.564860    3568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:03.564865    3568 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:03.564868    3568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:03.565058    3568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:03.565203    3568 out.go:298] Setting JSON to false
	I0719 11:53:03.565216    3568 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:53:03.565261    3568 notify.go:220] Checking for updates...
	I0719 11:53:03.565461    3568 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:03.565469    3568 status.go:255] checking status of multinode-281000 ...
	I0719 11:53:03.565753    3568 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:53:03.565758    3568 status.go:343] host is not running, skipping remaining checks
	I0719 11:53:03.565761    3568 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr: exit status 7 (72.811458ms)

                                                
                                                
-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:53:21.511469    3574 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:21.511722    3574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:21.511727    3574 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:21.511730    3574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:21.511902    3574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:21.512075    3574 out.go:298] Setting JSON to false
	I0719 11:53:21.512089    3574 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:53:21.512133    3574 notify.go:220] Checking for updates...
	I0719 11:53:21.512341    3574 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:21.512350    3574 status.go:255] checking status of multinode-281000 ...
	I0719 11:53:21.512672    3574 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:53:21.512677    3574 status.go:343] host is not running, skipping remaining checks
	I0719 11:53:21.512680    3574 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-281000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (32.833917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (59.70s)
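Context: the ten status invocations above (11:52:21 through 11:53:21, at widening intervals) are a retry loop around multinode_test.go:290, which is why this test burns roughly 60 seconds before giving up on a host that never leaves the Stopped state. A minimal sketch of that polling pattern; this is not the test's actual code, and the timings are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(time.Minute)
	delay := time.Second
	for time.Now().Before(deadline) {
		if exec.Command("out/minikube-darwin-arm64", "-p", "multinode-281000", "status").Run() == nil {
			fmt.Println("status healthy")
			return
		}
		time.Sleep(delay)
		delay *= 2 // widen the gap, roughly matching the timestamps above
	}
	fmt.Println("gave up: host never left the Stopped state")
}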

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-281000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-281000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-281000: (3.775897917s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr
E0719 11:53:27.274195    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225231459s)

                                                
                                                
-- stdout --
	* [multinode-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-281000" primary control-plane node in "multinode-281000" cluster
	* Restarting existing qemu2 VM for "multinode-281000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-281000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 11:53:25.418287    3598 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:25.418501    3598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:25.418506    3598 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:25.418510    3598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:25.418681    3598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:25.420022    3598 out.go:298] Setting JSON to false
	I0719 11:53:25.439869    3598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3168,"bootTime":1721412037,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:53:25.439941    3598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:53:25.445063    3598 out.go:177] * [multinode-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:53:25.451922    3598 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:53:25.451968    3598 notify.go:220] Checking for updates...
	I0719 11:53:25.457242    3598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:53:25.459902    3598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:53:25.462973    3598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:53:25.465941    3598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:53:25.468999    3598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:53:25.472223    3598 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:25.472275    3598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:53:25.476930    3598 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:53:25.483952    3598 start.go:297] selected driver: qemu2
	I0719 11:53:25.483961    3598 start.go:901] validating driver "qemu2" against &{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:53:25.484028    3598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:53:25.486485    3598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:53:25.486509    3598 cni.go:84] Creating CNI manager for ""
	I0719 11:53:25.486519    3598 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 11:53:25.486573    3598 start.go:340] cluster config:
	{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:53:25.490426    3598 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:25.497921    3598 out.go:177] * Starting "multinode-281000" primary control-plane node in "multinode-281000" cluster
	I0719 11:53:25.501917    3598 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:53:25.501939    3598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:53:25.501950    3598 cache.go:56] Caching tarball of preloaded images
	I0719 11:53:25.502028    3598 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:53:25.502034    3598 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:53:25.502092    3598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/multinode-281000/config.json ...
	I0719 11:53:25.502511    3598 start.go:360] acquireMachinesLock for multinode-281000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:53:25.502549    3598 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "multinode-281000"
	I0719 11:53:25.502558    3598 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:53:25.502566    3598 fix.go:54] fixHost starting: 
	I0719 11:53:25.502700    3598 fix.go:112] recreateIfNeeded on multinode-281000: state=Stopped err=<nil>
	W0719 11:53:25.502711    3598 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:53:25.510992    3598 out.go:177] * Restarting existing qemu2 VM for "multinode-281000" ...
	I0719 11:53:25.513894    3598 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:53:25.513936    3598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:af:24:95:29:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:53:25.516128    3598 main.go:141] libmachine: STDOUT: 
	I0719 11:53:25.516155    3598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:53:25.516187    3598 fix.go:56] duration metric: took 13.620792ms for fixHost
	I0719 11:53:25.516193    3598 start.go:83] releasing machines lock for "multinode-281000", held for 13.639625ms
	W0719 11:53:25.516200    3598 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:53:25.516230    3598 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:53:25.516236    3598 start.go:729] Will try again in 5 seconds ...
	I0719 11:53:30.518324    3598 start.go:360] acquireMachinesLock for multinode-281000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:53:30.518806    3598 start.go:364] duration metric: took 334.791µs to acquireMachinesLock for "multinode-281000"
	I0719 11:53:30.518976    3598 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:53:30.518999    3598 fix.go:54] fixHost starting: 
	I0719 11:53:30.519718    3598 fix.go:112] recreateIfNeeded on multinode-281000: state=Stopped err=<nil>
	W0719 11:53:30.519750    3598 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:53:30.524173    3598 out.go:177] * Restarting existing qemu2 VM for "multinode-281000" ...
	I0719 11:53:30.532018    3598 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:53:30.532266    3598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:af:24:95:29:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:53:30.541366    3598 main.go:141] libmachine: STDOUT: 
	I0719 11:53:30.541464    3598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:53:30.541547    3598 fix.go:56] duration metric: took 22.552417ms for fixHost
	I0719 11:53:30.541575    3598 start.go:83] releasing machines lock for "multinode-281000", held for 22.715209ms
	W0719 11:53:30.541774    3598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-281000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-281000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:53:30.549287    3598 out.go:177] 
	W0719 11:53:30.553251    3598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:53:30.553363    3598 out.go:239] * 
	* 
	W0719 11:53:30.556043    3598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:53:30.564176    3598 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-281000" : exit status 80
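Every failure in this block traces back to the same driver-level line: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives the vmnet file descriptor and the VM start is aborted. A quick check of the daemon state on the agent would be a sketch like the following (assuming the /opt/socket_vmnet install prefix seen in the command lines above; the launchd job label is an assumption and may differ per install method):

	# Is the daemon's unix socket present?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet job loaded in launchd? (label is an assumption)
	sudo launchctl list | grep -i socket_vmnet
	# Minimal repro of the client-side error: socket_vmnet_client connects to
	# the socket, then execs the given command with the vmnet fd attached.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If the daemon is down, the last command fails with the same "Failed to connect" message recorded throughout this report.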
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-281000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (32.544333ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.13s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 node delete m03: exit status 83 (37.827375ms)

-- stdout --
	* The control-plane node multinode-281000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-281000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-281000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr: exit status 7 (28.817167ms)

-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0719 11:53:30.743658    3614 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:30.743797    3614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:30.743801    3614 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:30.743803    3614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:30.743938    3614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:30.744055    3614 out.go:298] Setting JSON to false
	I0719 11:53:30.744065    3614 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:53:30.744125    3614 notify.go:220] Checking for updates...
	I0719 11:53:30.744247    3614 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:30.744253    3614 status.go:255] checking status of multinode-281000 ...
	I0719 11:53:30.744458    3614 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:53:30.744462    3614 status.go:343] host is not running, skipping remaining checks
	I0719 11:53:30.744464    3614 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (28.59975ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.34s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-281000 stop: (3.2203165s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status: exit status 7 (61.416916ms)

-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr: exit status 7 (32.781084ms)

-- stdout --
	multinode-281000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0719 11:53:34.087362    3638 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:34.087546    3638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:34.087554    3638 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:34.087557    3638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:34.087679    3638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:34.087800    3638 out.go:298] Setting JSON to false
	I0719 11:53:34.087833    3638 mustload.go:65] Loading cluster: multinode-281000
	I0719 11:53:34.087861    3638 notify.go:220] Checking for updates...
	I0719 11:53:34.088026    3638 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:34.088032    3638 status.go:255] checking status of multinode-281000 ...
	I0719 11:53:34.088235    3638 status.go:330] multinode-281000 host status = "Stopped" (err=<nil>)
	I0719 11:53:34.088239    3638 status.go:343] host is not running, skipping remaining checks
	I0719 11:53:34.088241    3638 status.go:257] multinode-281000 status: &{Name:multinode-281000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr": multinode-281000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-281000 status --alsologtostderr": multinode-281000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (28.559625ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.34s)

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186057458s)

-- stdout --
	* [multinode-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-281000" primary control-plane node in "multinode-281000" cluster
	* Restarting existing qemu2 VM for "multinode-281000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-281000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I0719 11:53:34.144952    3642 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:34.145082    3642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:34.145086    3642 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:34.145088    3642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:34.145221    3642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:34.146234    3642 out.go:298] Setting JSON to false
	I0719 11:53:34.162280    3642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3177,"bootTime":1721412037,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:53:34.162381    3642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:53:34.167120    3642 out.go:177] * [multinode-281000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:53:34.178967    3642 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:53:34.179000    3642 notify.go:220] Checking for updates...
	I0719 11:53:34.184230    3642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:53:34.186989    3642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:53:34.190039    3642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:53:34.193039    3642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:53:34.196068    3642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:53:34.199315    3642 config.go:182] Loaded profile config "multinode-281000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:34.199571    3642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:53:34.203998    3642 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:53:34.211054    3642 start.go:297] selected driver: qemu2
	I0719 11:53:34.211062    3642 start.go:901] validating driver "qemu2" against &{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:53:34.211130    3642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:53:34.213537    3642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:53:34.213560    3642 cni.go:84] Creating CNI manager for ""
	I0719 11:53:34.213564    3642 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 11:53:34.213605    3642 start.go:340] cluster config:
	{Name:multinode-281000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-281000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:53:34.217213    3642 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:34.224985    3642 out.go:177] * Starting "multinode-281000" primary control-plane node in "multinode-281000" cluster
	I0719 11:53:34.229018    3642 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:53:34.229037    3642 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:53:34.229051    3642 cache.go:56] Caching tarball of preloaded images
	I0719 11:53:34.229104    3642 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:53:34.229119    3642 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 11:53:34.229175    3642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/multinode-281000/config.json ...
	I0719 11:53:34.229582    3642 start.go:360] acquireMachinesLock for multinode-281000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:53:34.229612    3642 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "multinode-281000"
	I0719 11:53:34.229620    3642 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:53:34.229625    3642 fix.go:54] fixHost starting: 
	I0719 11:53:34.229735    3642 fix.go:112] recreateIfNeeded on multinode-281000: state=Stopped err=<nil>
	W0719 11:53:34.229742    3642 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:53:34.233994    3642 out.go:177] * Restarting existing qemu2 VM for "multinode-281000" ...
	I0719 11:53:34.241975    3642 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:53:34.242020    3642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:af:24:95:29:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:53:34.243941    3642 main.go:141] libmachine: STDOUT: 
	I0719 11:53:34.243959    3642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:53:34.243989    3642 fix.go:56] duration metric: took 14.363375ms for fixHost
	I0719 11:53:34.243993    3642 start.go:83] releasing machines lock for "multinode-281000", held for 14.377292ms
	W0719 11:53:34.243999    3642 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:53:34.244022    3642 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:53:34.244027    3642 start.go:729] Will try again in 5 seconds ...
	I0719 11:53:39.246240    3642 start.go:360] acquireMachinesLock for multinode-281000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:53:39.246820    3642 start.go:364] duration metric: took 454.625µs to acquireMachinesLock for "multinode-281000"
	I0719 11:53:39.247013    3642 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:53:39.247035    3642 fix.go:54] fixHost starting: 
	I0719 11:53:39.247745    3642 fix.go:112] recreateIfNeeded on multinode-281000: state=Stopped err=<nil>
	W0719 11:53:39.247772    3642 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:53:39.252258    3642 out.go:177] * Restarting existing qemu2 VM for "multinode-281000" ...
	I0719 11:53:39.260193    3642 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:53:39.260430    3642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:af:24:95:29:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/multinode-281000/disk.qcow2
	I0719 11:53:39.269837    3642 main.go:141] libmachine: STDOUT: 
	I0719 11:53:39.269888    3642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:53:39.270012    3642 fix.go:56] duration metric: took 22.976542ms for fixHost
	I0719 11:53:39.270029    3642 start.go:83] releasing machines lock for "multinode-281000", held for 23.181542ms
	W0719 11:53:39.270209    3642 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-281000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-281000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:53:39.277187    3642 out.go:177] 
	W0719 11:53:39.281271    3642 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:53:39.281295    3642 out.go:239] * 
	* 
	W0719 11:53:39.283918    3642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:53:39.291177    3642 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-281000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
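Note the retry shape in the log above: fixHost restarts the existing VM, StartHost fails on the refused socket, start.go waits 5 seconds and retries once, and the second attempt fails identically, which is why this test exits with GUEST_PROVISION after roughly 5 seconds. Both attempts go through the same socket handshake, so the retry cannot succeed until the daemon is brought back. Restarting it in the foreground for debugging would be a sketch along these lines (the daemon binary path and the gateway address, socket_vmnet's documented default, are assumptions here):

	# Run the daemon in the foreground; vmnet.framework requires root.
	# Gateway address below is the socket_vmnet default (assumption).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet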
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (67.7945ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

TestMultiNode/serial/ValidateNameConflict (19.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-281000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-281000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-281000-m01 --driver=qemu2 : exit status 80 (9.762479166s)

-- stdout --
	* [multinode-281000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-281000-m01" primary control-plane node in "multinode-281000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-281000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-281000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-281000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-281000-m02 --driver=qemu2 : exit status 80 (9.885903917s)

-- stdout --
	* [multinode-281000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-281000-m02" primary control-plane node in "multinode-281000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-281000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-281000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-281000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-281000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-281000: exit status 83 (77.733875ms)

-- stdout --
	* The control-plane node multinode-281000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-281000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-281000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-281000 -n multinode-281000: exit status 7 (29.611125ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-281000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.87s)

TestPreload (10.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-707000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-707000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.96875875s)

-- stdout --
	* [test-preload-707000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-707000" primary control-plane node in "test-preload-707000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-707000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I0719 11:53:59.373838    3698 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:53:59.373974    3698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:59.373977    3698 out.go:304] Setting ErrFile to fd 2...
	I0719 11:53:59.373979    3698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:53:59.374098    3698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:53:59.375143    3698 out.go:298] Setting JSON to false
	I0719 11:53:59.391122    3698 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3202,"bootTime":1721412037,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:53:59.391186    3698 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:53:59.397682    3698 out.go:177] * [test-preload-707000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:53:59.403608    3698 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:53:59.403648    3698 notify.go:220] Checking for updates...
	I0719 11:53:59.411626    3698 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:53:59.414586    3698 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:53:59.417658    3698 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:53:59.420684    3698 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:53:59.423621    3698 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:53:59.426933    3698 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:53:59.426987    3698 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:53:59.430646    3698 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:53:59.437608    3698 start.go:297] selected driver: qemu2
	I0719 11:53:59.437614    3698 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:53:59.437620    3698 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:53:59.440053    3698 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:53:59.443636    3698 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:53:59.446680    3698 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 11:53:59.446715    3698 cni.go:84] Creating CNI manager for ""
	I0719 11:53:59.446723    3698 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:53:59.446732    3698 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:53:59.446760    3698 start.go:340] cluster config:
	{Name:test-preload-707000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:53:59.450536    3698 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.457685    3698 out.go:177] * Starting "test-preload-707000" primary control-plane node in "test-preload-707000" cluster
	I0719 11:53:59.461619    3698 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0719 11:53:59.461701    3698 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/test-preload-707000/config.json ...
	I0719 11:53:59.461728    3698 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/test-preload-707000/config.json: {Name:mka2ba54d176a468bfc60109d209576c91724e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:53:59.461746    3698 cache.go:107] acquiring lock: {Name:mkf3de4290b7ea2a2cf08483b15bdd55dce00d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.461758    3698 cache.go:107] acquiring lock: {Name:mk341571a868320e86eb789d7cf91dcd1b785866 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.461793    3698 cache.go:107] acquiring lock: {Name:mk8ba97f614bb2f779f3a9cbb1883510667b826a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.461959    3698 cache.go:107] acquiring lock: {Name:mk1ce6ba3d982e995cfb7adf542dd79db2a8f462 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.462004    3698 cache.go:107] acquiring lock: {Name:mkda044c7c9a11981574fa0841cb16bca18c8450 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.462010    3698 cache.go:107] acquiring lock: {Name:mk9ff6efa5380d9af300195d2412e70d01f103b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.462011    3698 cache.go:107] acquiring lock: {Name:mk22abcd5a7f4bf79ce8a156364d57d2a92d30b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.462133    3698 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 11:53:59.462143    3698 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 11:53:59.462147    3698 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 11:53:59.462159    3698 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:53:59.462187    3698 cache.go:107] acquiring lock: {Name:mk31275b8596c212a12317ab40a3301fe492e429 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:53:59.462210    3698 start.go:360] acquireMachinesLock for test-preload-707000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:53:59.462251    3698 start.go:364] duration metric: took 34.667µs to acquireMachinesLock for "test-preload-707000"
	I0719 11:53:59.462316    3698 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:53:59.462328    3698 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 11:53:59.462347    3698 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 11:53:59.462265    3698 start.go:93] Provisioning new machine with config: &{Name:test-preload-707000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:53:59.462449    3698 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:53:59.462513    3698 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:53:59.468614    3698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 11:53:59.471447    3698 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 11:53:59.472313    3698 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 11:53:59.472364    3698 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 11:53:59.472466    3698 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 11:53:59.474464    3698 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:53:59.474711    3698 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:53:59.474772    3698 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 11:53:59.474772    3698 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:53:59.487128    3698 start.go:159] libmachine.API.Create for "test-preload-707000" (driver="qemu2")
	I0719 11:53:59.487157    3698 client.go:168] LocalClient.Create starting
	I0719 11:53:59.487242    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:53:59.487277    3698 main.go:141] libmachine: Decoding PEM data...
	I0719 11:53:59.487289    3698 main.go:141] libmachine: Parsing certificate...
	I0719 11:53:59.487332    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:53:59.487356    3698 main.go:141] libmachine: Decoding PEM data...
	I0719 11:53:59.487367    3698 main.go:141] libmachine: Parsing certificate...
	I0719 11:53:59.487727    3698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:53:59.638856    3698 main.go:141] libmachine: Creating SSH key...
	I0719 11:53:59.752785    3698 main.go:141] libmachine: Creating Disk image...
	I0719 11:53:59.752805    3698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:53:59.752974    3698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2
	I0719 11:53:59.762868    3698 main.go:141] libmachine: STDOUT: 
	I0719 11:53:59.762888    3698 main.go:141] libmachine: STDERR: 
	I0719 11:53:59.762935    3698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2 +20000M
	I0719 11:53:59.771925    3698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:53:59.771944    3698 main.go:141] libmachine: STDERR: 
	I0719 11:53:59.771956    3698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2
	I0719 11:53:59.771962    3698 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:53:59.771972    3698 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:53:59.771999    3698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:5c:4c:b5:ef:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2
	I0719 11:53:59.774173    3698 main.go:141] libmachine: STDOUT: 
	I0719 11:53:59.774202    3698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:53:59.774218    3698 client.go:171] duration metric: took 287.060625ms to LocalClient.Create
	I0719 11:53:59.903728    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0719 11:53:59.953218    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0719 11:53:59.981601    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 11:53:59.985472    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0719 11:53:59.993671    3698 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 11:53:59.993703    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 11:54:00.053754    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 11:54:00.108631    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0719 11:54:00.108654    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0719 11:54:00.108680    3698 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 646.782375ms
	I0719 11:54:00.108711    3698 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0719 11:54:00.448634    3698 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 11:54:00.448740    3698 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 11:54:00.697971    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 11:54:00.698021    3698 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.236285334s
	I0719 11:54:00.698046    3698 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 11:54:01.774448    3698 start.go:128] duration metric: took 2.3119895s to createHost
	I0719 11:54:01.774514    3698 start.go:83] releasing machines lock for "test-preload-707000", held for 2.312283334s
	W0719 11:54:01.774577    3698 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:01.786792    3698 out.go:177] * Deleting "test-preload-707000" in qemu2 ...
	W0719 11:54:01.813378    3698 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:01.813406    3698 start.go:729] Will try again in 5 seconds ...
	I0719 11:54:02.017727    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0719 11:54:02.017775    3698 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.555825334s
	I0719 11:54:02.017803    3698 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0719 11:54:02.098386    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0719 11:54:02.098430    3698 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.636472084s
	I0719 11:54:02.098498    3698 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0719 11:54:04.247892    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0719 11:54:04.247942    3698 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.78581575s
	I0719 11:54:04.247966    3698 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0719 11:54:04.472395    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0719 11:54:04.472439    3698 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.010758875s
	I0719 11:54:04.472464    3698 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0719 11:54:05.040674    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0719 11:54:05.040729    3698 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.579038625s
	I0719 11:54:05.040754    3698 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0719 11:54:06.813608    3698 start.go:360] acquireMachinesLock for test-preload-707000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:54:06.814041    3698 start.go:364] duration metric: took 361.084µs to acquireMachinesLock for "test-preload-707000"
	I0719 11:54:06.814180    3698 start.go:93] Provisioning new machine with config: &{Name:test-preload-707000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-707000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:54:06.814395    3698 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:54:06.820065    3698 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 11:54:06.869570    3698 start.go:159] libmachine.API.Create for "test-preload-707000" (driver="qemu2")
	I0719 11:54:06.869608    3698 client.go:168] LocalClient.Create starting
	I0719 11:54:06.869746    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:54:06.869811    3698 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:06.869833    3698 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:06.869889    3698 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:54:06.869934    3698 main.go:141] libmachine: Decoding PEM data...
	I0719 11:54:06.869954    3698 main.go:141] libmachine: Parsing certificate...
	I0719 11:54:06.870441    3698 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:54:07.020153    3698 main.go:141] libmachine: Creating SSH key...
	I0719 11:54:07.244249    3698 main.go:141] libmachine: Creating Disk image...
	I0719 11:54:07.244258    3698 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:54:07.244463    3698 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2
	I0719 11:54:07.254273    3698 main.go:141] libmachine: STDOUT: 
	I0719 11:54:07.254296    3698 main.go:141] libmachine: STDERR: 
	I0719 11:54:07.254355    3698 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2 +20000M
	I0719 11:54:07.262537    3698 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:54:07.262553    3698 main.go:141] libmachine: STDERR: 
	I0719 11:54:07.262568    3698 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2
	I0719 11:54:07.262574    3698 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:54:07.262581    3698 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:54:07.262619    3698 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:b1:80:8e:16:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/test-preload-707000/disk.qcow2
	I0719 11:54:07.264364    3698 main.go:141] libmachine: STDOUT: 
	I0719 11:54:07.264378    3698 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:54:07.264391    3698 client.go:171] duration metric: took 394.784458ms to LocalClient.Create
	I0719 11:54:08.526267    3698 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0719 11:54:08.526332    3698 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.064505458s
	I0719 11:54:08.526380    3698 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0719 11:54:08.526437    3698 cache.go:87] Successfully saved all images to host disk.
	I0719 11:54:09.266686    3698 start.go:128] duration metric: took 2.452269958s to createHost
	I0719 11:54:09.266735    3698 start.go:83] releasing machines lock for "test-preload-707000", held for 2.452702042s
	W0719 11:54:09.267042    3698 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-707000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-707000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:54:09.280422    3698 out.go:177] 
	W0719 11:54:09.284683    3698 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:54:09.284706    3698 out.go:239] * 
	* 
	W0719 11:54:09.287532    3698 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:54:09.299625    3698 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-707000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-19 11:54:09.317351 -0700 PDT m=+2474.649213501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-707000 -n test-preload-707000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-707000 -n test-preload-707000: exit status 7 (66.093ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-707000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-707000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-707000
--- FAIL: TestPreload (10.11s)
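Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so TestPreload and the tests below never get a booting VM. A minimal triage sketch for the affected host, assuming a Homebrew-managed socket_vmnet as in the minikube qemu2 driver docs (the service name and install layout are assumptions, not values from this log):

	# Is the unix socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon; vmnet requires root, hence sudo (assumes Homebrew services manage it)
	sudo brew services restart socket_vmnet

Once the daemon answers, re-running the start command above (out/minikube-darwin-arm64 start -p test-preload-707000 --driver=qemu2 ...) should get past the "Starting QEMU VM..." step.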

TestScheduledStopUnix (9.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-629000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-629000 --memory=2048 --driver=qemu2 : exit status 80 (9.797473875s)

-- stdout --
	* [scheduled-stop-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-629000" primary control-plane node in "scheduled-stop-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-629000" primary control-plane node in "scheduled-stop-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-19 11:54:19.257687 -0700 PDT m=+2484.589685584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-629000 -n scheduled-stop-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-629000 -n scheduled-stop-629000: exit status 7 (67.529583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-629000
--- FAIL: TestScheduledStopUnix (9.94s)
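For local reproduction, a sketch of replaying a single integration test from a minikube source checkout; the TEST_ARGS convention follows the contributor docs, so treat the exact flags and Makefile target as assumptions rather than values from this report:

	# Build the darwin/arm64 binary the tests shell out to, then run just this test against qemu2
	make out/minikube-darwin-arm64
	env TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestScheduledStopUnix" make integration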

TestSkaffold (12.16s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1191257306 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1191257306 version: (1.061964209s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-265000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-265000 --memory=2600 --driver=qemu2 : exit status 80 (9.732132291s)

-- stdout --
	* [skaffold-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-265000" primary control-plane node in "skaffold-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-265000" primary control-plane node in "skaffold-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-19 11:54:31.422846 -0700 PDT m=+2496.755010417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-265000 -n skaffold-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-265000 -n skaffold-265000: exit status 7 (60.273ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-265000
--- FAIL: TestSkaffold (12.16s)

TestRunningBinaryUpgrade (690.04s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1627442309 start -p running-upgrade-589000 --memory=2200 --vm-driver=qemu2 
E0719 11:56:49.517218    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1627442309 start -p running-upgrade-589000 --memory=2200 --vm-driver=qemu2 : exit status 90 (1m50.202246584s)

-- stdout --
	* [running-upgrade-589000] minikube v1.26.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/legacy_kubeconfig2486857308
	* Using the qemu2 (experimental) driver based on user configuration
	* Downloading VM boot image ...
	* minikube 1.33.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.33.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node running-upgrade-589000 in cluster running-upgrade-589000
	* Downloading Kubernetes v1.24.1 preload ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T18:57:03Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/cri-dockerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
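Unlike the socket_vmnet failures above, this attempt booted a VM and failed one step later: the v1.26.0 binary's "sudo crictl version" probe timed out waiting for cri-dockerd at unix:///var/run/cri-dockerd.sock. A hedged way to inspect that state by hand (the profile name comes from this log; the cri-docker.socket/cri-docker.service unit names are assumptions based on cri-dockerd's stock systemd units):

	minikube ssh -p running-upgrade-589000 -- sudo systemctl status cri-docker.socket cri-docker.service
	minikube ssh -p running-upgrade-589000 -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version

The immediate retry below succeeded in about 43s, which points to a transient slow cri-dockerd startup rather than a persistent misconfiguration.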
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1627442309 start -p running-upgrade-589000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1627442309 start -p running-upgrade-589000 --memory=2200 --vm-driver=qemu2 : (43.351540125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-589000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-589000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m21.007669208s)

-- stdout --
	* [running-upgrade-589000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-589000" primary control-plane node in "running-upgrade-589000" cluster
	* Updating the running qemu2 "running-upgrade-589000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0719 11:57:48.128158    4100 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:57:48.128287    4100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:57:48.128290    4100 out.go:304] Setting ErrFile to fd 2...
	I0719 11:57:48.128293    4100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:57:48.128433    4100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:57:48.129502    4100 out.go:298] Setting JSON to false
	I0719 11:57:48.145638    4100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3431,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:57:48.145697    4100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:57:48.150366    4100 out.go:177] * [running-upgrade-589000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:57:48.157241    4100 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:57:48.157317    4100 notify.go:220] Checking for updates...
	I0719 11:57:48.163161    4100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:57:48.166208    4100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:57:48.169188    4100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:57:48.172171    4100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:57:48.175183    4100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:57:48.178416    4100 config.go:182] Loaded profile config "running-upgrade-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:57:48.181197    4100 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 11:57:48.184291    4100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:57:48.188243    4100 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:57:48.195191    4100 start.go:297] selected driver: qemu2
	I0719 11:57:48.195197    4100 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-589000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50327 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:57:48.195243    4100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:57:48.197324    4100 cni.go:84] Creating CNI manager for ""
	I0719 11:57:48.197338    4100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:57:48.197373    4100 start.go:340] cluster config:
	{Name:running-upgrade-589000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50327 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:57:48.197425    4100 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:57:48.204196    4100 out.go:177] * Starting "running-upgrade-589000" primary control-plane node in "running-upgrade-589000" cluster
	I0719 11:57:48.208199    4100 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 11:57:48.208212    4100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0719 11:57:48.208219    4100 cache.go:56] Caching tarball of preloaded images
	I0719 11:57:48.208259    4100 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:57:48.208264    4100 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0719 11:57:48.208309    4100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/config.json ...
	I0719 11:57:48.208684    4100 start.go:360] acquireMachinesLock for running-upgrade-589000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:57:48.208711    4100 start.go:364] duration metric: took 21.334µs to acquireMachinesLock for "running-upgrade-589000"
	I0719 11:57:48.208718    4100 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:57:48.208722    4100 fix.go:54] fixHost starting: 
	I0719 11:57:48.209258    4100 fix.go:112] recreateIfNeeded on running-upgrade-589000: state=Running err=<nil>
	W0719 11:57:48.209266    4100 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:57:48.218154    4100 out.go:177] * Updating the running qemu2 "running-upgrade-589000" VM ...
	I0719 11:57:48.222041    4100 machine.go:94] provisionDockerMachine start ...
	I0719 11:57:48.222086    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.222191    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.222195    4100 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 11:57:48.280592    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-589000
	
	I0719 11:57:48.280606    4100 buildroot.go:166] provisioning hostname "running-upgrade-589000"
	I0719 11:57:48.280643    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.280751    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.280756    4100 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-589000 && echo "running-upgrade-589000" | sudo tee /etc/hostname
	I0719 11:57:48.343027    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-589000
	
	I0719 11:57:48.343069    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.343170    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.343178    4100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-589000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-589000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-589000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 11:57:48.401359    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 11:57:48.401370    4100 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1066/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1066/.minikube}
	I0719 11:57:48.401376    4100 buildroot.go:174] setting up certificates
	I0719 11:57:48.401382    4100 provision.go:84] configureAuth start
	I0719 11:57:48.401386    4100 provision.go:143] copyHostCerts
	I0719 11:57:48.401436    4100 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem, removing ...
	I0719 11:57:48.401443    4100 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem
	I0719 11:57:48.401584    4100 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem (1082 bytes)
	I0719 11:57:48.401755    4100 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem, removing ...
	I0719 11:57:48.401759    4100 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem
	I0719 11:57:48.401808    4100 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem (1123 bytes)
	I0719 11:57:48.401909    4100 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem, removing ...
	I0719 11:57:48.401912    4100 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem
	I0719 11:57:48.401955    4100 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem (1679 bytes)
	I0719 11:57:48.402048    4100 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-589000 san=[127.0.0.1 localhost minikube running-upgrade-589000]
	I0719 11:57:48.478346    4100 provision.go:177] copyRemoteCerts
	I0719 11:57:48.478389    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 11:57:48.478398    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 11:57:48.509343    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 11:57:48.516529    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 11:57:48.523453    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 11:57:48.530439    4100 provision.go:87] duration metric: took 129.050292ms to configureAuth
	I0719 11:57:48.530448    4100 buildroot.go:189] setting minikube options for container-runtime
	I0719 11:57:48.530561    4100 config.go:182] Loaded profile config "running-upgrade-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:57:48.530595    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.530679    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.530684    4100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 11:57:48.588025    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 11:57:48.588032    4100 buildroot.go:70] root file system type: tmpfs
	I0719 11:57:48.588083    4100 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 11:57:48.588123    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.588222    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.588254    4100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 11:57:48.653696    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 11:57:48.653746    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.653851    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.653860    4100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 11:57:48.713718    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 11:57:48.713730    4100 machine.go:97] duration metric: took 491.688625ms to provisionDockerMachine
	I0719 11:57:48.713736    4100 start.go:293] postStartSetup for "running-upgrade-589000" (driver="qemu2")
	I0719 11:57:48.713743    4100 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 11:57:48.713792    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 11:57:48.713800    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 11:57:48.750665    4100 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 11:57:48.752405    4100 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 11:57:48.752414    4100 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1066/.minikube/addons for local assets ...
	I0719 11:57:48.752489    4100 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1066/.minikube/files for local assets ...
	I0719 11:57:48.752596    4100 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0719 11:57:48.752686    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 11:57:48.755554    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0719 11:57:48.762072    4100 start.go:296] duration metric: took 48.331792ms for postStartSetup
	I0719 11:57:48.762087    4100 fix.go:56] duration metric: took 553.37225ms for fixHost
	I0719 11:57:48.762118    4100 main.go:141] libmachine: Using SSH client type: native
	I0719 11:57:48.762227    4100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e92a10] 0x104e95270 <nil>  [] 0s} localhost 50256 <nil> <nil>}
	I0719 11:57:48.762232    4100 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 11:57:48.820948    4100 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415468.812214699
	
	I0719 11:57:48.820958    4100 fix.go:216] guest clock: 1721415468.812214699
	I0719 11:57:48.820962    4100 fix.go:229] Guest: 2024-07-19 11:57:48.812214699 -0700 PDT Remote: 2024-07-19 11:57:48.762089 -0700 PDT m=+0.653146210 (delta=50.125699ms)
	I0719 11:57:48.820981    4100 fix.go:200] guest clock delta is within tolerance: 50.125699ms
	I0719 11:57:48.820983    4100 start.go:83] releasing machines lock for "running-upgrade-589000", held for 612.277125ms
	I0719 11:57:48.821044    4100 ssh_runner.go:195] Run: cat /version.json
	I0719 11:57:48.821058    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 11:57:48.821047    4100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 11:57:48.821101    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	W0719 11:57:48.821696    4100 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50256: connect: connection refused
	I0719 11:57:48.821717    4100 retry.go:31] will retry after 193.511728ms: dial tcp [::1]:50256: connect: connection refused
	W0719 11:57:48.852569    4100 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0719 11:57:48.852617    4100 ssh_runner.go:195] Run: systemctl --version
	I0719 11:57:48.854591    4100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 11:57:48.856203    4100 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 11:57:48.856229    4100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 11:57:48.859012    4100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 11:57:48.863244    4100 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 11:57:48.863253    4100 start.go:495] detecting cgroup driver to use...
	I0719 11:57:48.863318    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 11:57:48.868768    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0719 11:57:48.872267    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 11:57:48.875790    4100 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 11:57:48.875814    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 11:57:48.878847    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 11:57:48.881853    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 11:57:48.885204    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 11:57:48.888933    4100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 11:57:48.892545    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 11:57:48.895560    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 11:57:48.898631    4100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 11:57:48.901613    4100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 11:57:48.904705    4100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 11:57:48.907329    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:57:48.990028    4100 ssh_runner.go:195] Run: sudo systemctl restart containerd
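	(Aside: the sed edits above assume a containerd config.toml of roughly the following shape; this fragment is reconstructed from the patterns the commands match, since the file itself is never dumped in this log.)
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.7"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false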
	I0719 11:57:48.999097    4100 start.go:495] detecting cgroup driver to use...
	I0719 11:57:48.999170    4100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 11:57:49.005161    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 11:57:49.011208    4100 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 11:57:49.017769    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 11:57:49.025809    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 11:57:49.030696    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 11:57:49.036165    4100 ssh_runner.go:195] Run: which cri-dockerd
	I0719 11:57:49.037243    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 11:57:49.040404    4100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 11:57:49.045224    4100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 11:57:49.140709    4100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 11:57:49.241307    4100 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 11:57:49.241369    4100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
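	(Aside: the 130-byte payload written to /etc/docker/daemon.json is not echoed in the log; a plausible sketch, assuming minikube's usual cgroupfs template, is the following.)
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}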
	I0719 11:57:49.246860    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:57:49.335716    4100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 11:57:50.908899    4100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.573188125s)
	I0719 11:57:50.908976    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 11:57:50.913986    4100 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 11:57:50.920832    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 11:57:50.925664    4100 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 11:57:51.019155    4100 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 11:57:51.084973    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:57:51.174156    4100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 11:57:51.180621    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 11:57:51.185522    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:57:51.272156    4100 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 11:57:51.310549    4100 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 11:57:51.310630    4100 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 11:57:51.313758    4100 start.go:563] Will wait 60s for crictl version
	I0719 11:57:51.313809    4100 ssh_runner.go:195] Run: which crictl
	I0719 11:57:51.315259    4100 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 11:57:51.326654    4100 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0719 11:57:51.326721    4100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 11:57:51.338772    4100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 11:57:51.358423    4100 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0719 11:57:51.358539    4100 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0719 11:57:51.359937    4100 kubeadm.go:883] updating cluster {Name:running-upgrade-589000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50327 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0719 11:57:51.359981    4100 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 11:57:51.360018    4100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 11:57:51.370211    4100 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 11:57:51.370226    4100 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	(The tags listed above carry the legacy k8s.gcr.io prefix while minikube compares against registry.k8s.io names, so the control-plane images count as not preloaded and are re-loaded from the local image cache in the steps that follow.)
	I0719 11:57:51.370266    4100 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 11:57:51.373453    4100 ssh_runner.go:195] Run: which lz4
	I0719 11:57:51.374706    4100 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 11:57:51.375821    4100 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 11:57:51.375829    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0719 11:57:52.289190    4100 docker.go:649] duration metric: took 914.522417ms to copy over tarball
	I0719 11:57:52.289244    4100 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 11:57:53.429338    4100 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.1400955s)
	I0719 11:57:53.429353    4100 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 11:57:53.445399    4100 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 11:57:53.448816    4100 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0719 11:57:53.453989    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:57:53.541500    4100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 11:57:54.743656    4100 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.202152583s)
	I0719 11:57:54.743761    4100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 11:57:54.754740    4100 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 11:57:54.754751    4100 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 11:57:54.754757    4100 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 11:57:54.760812    4100 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:57:54.762714    4100 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:57:54.765024    4100 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:57:54.765031    4100 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:57:54.766381    4100 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:57:54.766581    4100 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:57:54.768448    4100 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:57:54.768454    4100 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:57:54.769511    4100 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:57:54.769557    4100 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:57:54.771121    4100 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:57:54.771140    4100 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 11:57:54.772404    4100 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:57:54.772531    4100 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:57:54.773454    4100 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 11:57:54.774381    4100 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:57:55.192092    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:57:55.198940    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:57:55.208644    4100 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0719 11:57:55.208677    4100 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:57:55.208732    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:57:55.210379    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:57:55.215670    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 11:57:55.218890    4100 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0719 11:57:55.218908    4100 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:57:55.218946    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:57:55.229173    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0719 11:57:55.229413    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:57:55.231604    4100 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0719 11:57:55.231621    4100 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:57:55.231661    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:57:55.237997    4100 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0719 11:57:55.238018    4100 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:57:55.238073    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0719 11:57:55.248121    4100 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0719 11:57:55.248141    4100 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:57:55.248199    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:57:55.248215    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0719 11:57:55.261602    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 11:57:55.262813    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0719 11:57:55.265477    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 11:57:55.265580    4100 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 11:57:55.270102    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0719 11:57:55.272768    4100 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 11:57:55.272882    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:57:55.281026    4100 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0719 11:57:55.281049    4100 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0719 11:57:55.281061    4100 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0719 11:57:55.281080    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0719 11:57:55.281095    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0719 11:57:55.288342    4100 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0719 11:57:55.288364    4100 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:57:55.288422    4100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:57:55.317123    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 11:57:55.317243    4100 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0719 11:57:55.321464    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 11:57:55.321561    4100 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0719 11:57:55.333209    4100 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0719 11:57:55.333237    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0719 11:57:55.334930    4100 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0719 11:57:55.334944    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0719 11:57:55.351604    4100 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 11:57:55.351716    4100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:57:55.356925    4100 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 11:57:55.356943    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0719 11:57:55.394565    4100 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 11:57:55.394587    4100 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:57:55.394644    4100 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:57:55.451834    4100 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0719 11:57:55.451856    4100 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 11:57:55.451861    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0719 11:57:56.057462    4100 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 11:57:56.057500    4100 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 11:57:56.057548    4100 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 11:57:56.057568    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0719 11:57:56.057792    4100 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 11:57:56.222143    4100 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 11:57:56.222190    4100 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0719 11:57:56.222216    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0719 11:57:56.248969    4100 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 11:57:56.248984    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0719 11:57:56.485357    4100 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 11:57:56.485395    4100 cache_images.go:92] duration metric: took 1.7306565s to LoadCachedImages
	W0719 11:57:56.485440    4100 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0719 11:57:56.485447    4100 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0719 11:57:56.485504    4100 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-589000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 11:57:56.485568    4100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 11:57:56.499316    4100 cni.go:84] Creating CNI manager for ""
	I0719 11:57:56.499335    4100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:57:56.499343    4100 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 11:57:56.499352    4100 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-589000 NodeName:running-upgrade-589000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 11:57:56.499426    4100 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-589000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
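	(Aside: a config like the one above can be sanity-checked before use with a kubeadm dry run; this command is illustrative and was not executed in this log.)
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	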
	I0719 11:57:56.499664    4100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0719 11:57:56.502566    4100 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 11:57:56.502597    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 11:57:56.505274    4100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0719 11:57:56.509991    4100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 11:57:56.514836    4100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0719 11:57:56.519898    4100 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0719 11:57:56.521180    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:57:56.603057    4100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 11:57:56.607864    4100 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000 for IP: 10.0.2.15
	I0719 11:57:56.607870    4100 certs.go:194] generating shared ca certs ...
	I0719 11:57:56.607878    4100 certs.go:226] acquiring lock for ca certs: {Name:mk315b805d576c08b7c87d345baabbe459ef4715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:57:56.608023    4100 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.key
	I0719 11:57:56.608057    4100 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.key
	I0719 11:57:56.608061    4100 certs.go:256] generating profile certs ...
	I0719 11:57:56.608133    4100 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.key
	I0719 11:57:56.608150    4100 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.key.464c8b7d
	I0719 11:57:56.608162    4100 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.crt.464c8b7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0719 11:57:56.699136    4100 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.crt.464c8b7d ...
	I0719 11:57:56.699148    4100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.crt.464c8b7d: {Name:mkbccf4a6f8367fea5ffe358ef1d5364828abac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:57:56.702552    4100 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.key.464c8b7d ...
	I0719 11:57:56.702567    4100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.key.464c8b7d: {Name:mk51b90319664d26c8d0450fb1576b1a79dc528b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:57:56.702793    4100 certs.go:381] copying /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.crt.464c8b7d -> /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.crt
	I0719 11:57:56.702952    4100 certs.go:385] copying /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.key.464c8b7d -> /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.key
	I0719 11:57:56.703099    4100 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/proxy-client.key
	I0719 11:57:56.703230    4100 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565.pem (1338 bytes)
	W0719 11:57:56.703260    4100 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0719 11:57:56.703267    4100 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 11:57:56.703291    4100 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem (1082 bytes)
	I0719 11:57:56.703309    4100 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem (1123 bytes)
	I0719 11:57:56.703326    4100 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem (1679 bytes)
	I0719 11:57:56.703365    4100 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0719 11:57:56.703708    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 11:57:56.711719    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 11:57:56.718607    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 11:57:56.725302    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 11:57:56.734014    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 11:57:56.740960    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 11:57:56.748139    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 11:57:56.755408    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 11:57:56.762209    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0719 11:57:56.768883    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0719 11:57:56.776053    4100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 11:57:56.783594    4100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 11:57:56.788952    4100 ssh_runner.go:195] Run: openssl version
	I0719 11:57:56.790857    4100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0719 11:57:56.795015    4100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0719 11:57:56.796566    4100 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:20 /usr/share/ca-certificates/1565.pem
	I0719 11:57:56.796598    4100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0719 11:57:56.798602    4100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
	I0719 11:57:56.801814    4100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0719 11:57:56.805095    4100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0719 11:57:56.806680    4100 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:20 /usr/share/ca-certificates/15652.pem
	I0719 11:57:56.806698    4100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0719 11:57:56.808664    4100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 11:57:56.812073    4100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 11:57:56.815175    4100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:57:56.816727    4100 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:57:56.816746    4100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:57:56.818888    4100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 11:57:56.821967    4100 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 11:57:56.823608    4100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 11:57:56.825417    4100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 11:57:56.827230    4100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 11:57:56.829013    4100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 11:57:56.830923    4100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 11:57:56.832769    4100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 11:57:56.834508    4100 kubeadm.go:392] StartCluster: {Name:running-upgrade-589000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50327 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-589000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:57:56.834571    4100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 11:57:56.845115    4100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 11:57:56.848842    4100 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 11:57:56.848848    4100 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 11:57:56.848869    4100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 11:57:56.852085    4100 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:57:56.852330    4100 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-589000" does not appear in /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:57:56.852386    4100 kubeconfig.go:62] /Users/jenkins/minikube-integration/19307-1066/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-589000" cluster setting kubeconfig missing "running-upgrade-589000" context setting]
	I0719 11:57:56.852547    4100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:57:56.853213    4100 kapi.go:59] client config for running-upgrade-589000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106227790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 11:57:56.853546    4100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 11:57:56.856962    4100 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-589000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0719 11:57:56.856974    4100 kubeadm.go:1160] stopping kube-system containers ...
	I0719 11:57:56.857015    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 11:57:56.868420    4100 docker.go:483] Stopping containers: [e1f20b2a5a53 cd79b33fb4aa 213784f515d6 a3c72963ab49 3ed1a881f9e2 4af0fa5b107c b8f4445650ff 1af47df9430d 0815be60ea7f f31786b5b796]
	I0719 11:57:56.868501    4100 ssh_runner.go:195] Run: docker stop e1f20b2a5a53 cd79b33fb4aa 213784f515d6 a3c72963ab49 3ed1a881f9e2 4af0fa5b107c b8f4445650ff 1af47df9430d 0815be60ea7f f31786b5b796
	I0719 11:57:56.879656    4100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 11:57:56.965821    4100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 11:57:56.969649    4100 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 19 18:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 19 18:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 19 18:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 19 18:57 /etc/kubernetes/scheduler.conf
	
	I0719 11:57:56.969679    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/admin.conf
	I0719 11:57:56.973051    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:57:56.973074    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 11:57:56.976331    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/kubelet.conf
	I0719 11:57:56.979478    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:57:56.979504    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 11:57:56.982186    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/controller-manager.conf
	I0719 11:57:56.985034    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:57:56.985055    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 11:57:56.988123    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/scheduler.conf
	I0719 11:57:56.990683    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:57:56.990709    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 11:57:56.993317    4100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 11:57:56.996513    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:57:57.017243    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:57:57.499332    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:57:57.699479    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:57:57.724531    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:57:57.743684    4100 api_server.go:52] waiting for apiserver process to appear ...
	I0719 11:57:57.743763    4100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:57:58.244833    4100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:57:58.745837    4100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:57:58.749877    4100 api_server.go:72] duration metric: took 1.006209416s to wait for apiserver process to appear ...
	I0719 11:57:58.749886    4100 api_server.go:88] waiting for apiserver healthz status ...
	I0719 11:57:58.749894    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:03.752017    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:03.752054    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:08.752370    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:08.752489    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:13.753461    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:13.753563    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:18.754731    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:18.754811    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:23.756440    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:23.756523    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:28.758462    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:28.758552    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:33.759963    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:33.760040    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:38.762665    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:38.762704    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:43.764889    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:43.764940    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:48.767415    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:48.767500    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:53.768390    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:58:53.768414    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:58:58.769026    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
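
Every healthz probe in this run fails identically: no response headers arrive from 10.0.2.15:8443 within the 5-second client timeout, which is more consistent with the QEMU guest address being unreachable from the probe than with a crashing apiserver. A sketch of a single probe, assuming a plain 5s client timeout and skipping certificate verification for brevity (the real probe trusts the cluster CA):

    package diag

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against /healthz with a 5-second overall
    // timeout, matching the 5s gap between each "Checking" and "stopped" pair
    // in the log. An unreachable endpoint surfaces as the exact error logged:
    // "Client.Timeout exceeded while awaiting headers".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }
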
	I0719 11:58:58.769130    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:58:58.784629    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:58:58.784719    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:58:58.796052    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:58:58.796137    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:58:58.807943    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:58:58.808027    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:58:58.823070    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:58:58.823150    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:58:58.835801    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:58:58.835864    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:58:58.847432    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:58:58.847504    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:58:58.857872    4100 logs.go:276] 0 containers: []
	W0719 11:58:58.857887    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:58:58.857939    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:58:58.872461    4100 logs.go:276] 1 containers: [3a65181cdb60]
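
Each failed probe triggers the same inventory: one docker ps -a per control-plane component, filtered on the kubelet's k8s_<name>_... container-naming convention. Two IDs for the apiserver, etcd, scheduler, and controller-manager likely mean an exited instance plus its restarted replacement. A sketch of the enumeration (component list taken from the log):

    package diag

    import (
        "os/exec"
        "strings"
    )

    // listComponentContainers returns the IDs of every container, running or
    // exited, whose name carries the k8s_<component> prefix that cri-dockerd
    // assigns to pod containers.
    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }
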
	I0719 11:58:58.872478    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:58:58.872484    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:58:58.889854    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:58:58.889867    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:58:58.910404    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:58:58.910415    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:58:58.951463    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:58:58.951474    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:58:58.974282    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:58:58.974296    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:58:58.992077    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:58:58.992089    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:58:59.007150    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:58:59.007162    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:58:59.023606    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:58:59.023619    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:58:59.050389    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:58:59.050397    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:58:59.062672    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:58:59.062683    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:58:59.067136    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:58:59.067147    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:58:59.162850    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:58:59.162862    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:58:59.175784    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:58:59.175796    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:58:59.187982    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:58:59.187996    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:58:59.212263    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:58:59.212273    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:58:59.226769    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:58:59.226778    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
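
The gather pass then tails 400 lines from every container found, plus the host-level sources: the kubelet and Docker journals, filtered dmesg, kubectl describe nodes, and a crictl-with-docker-fallback container listing. A compact sketch of the whole pass, with command strings copied verbatim from the log (the two container IDs shown are the apiserver pair from above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) {
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Println(string(out))
    }

    func main() {
        // Per-container tails; the full ID list comes from the enumeration above.
        for _, id := range []string{"8e8e2f8d23b3", "b8f4445650ff"} {
            run("docker logs --tail 400 " + id)
        }
        // Host-level sources.
        run("sudo journalctl -u kubelet -n 400")
        run("sudo journalctl -u docker -u cri-docker -n 400")
        run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }

This gather-then-repoll cycle repeats for the remainder of the section, roughly every eight seconds (one 5s probe timeout plus two to three seconds of log collection), without the healthz endpoint ever answering.
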
	I0719 11:59:01.739588    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:06.741841    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:06.741993    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:06.753404    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:06.753470    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:06.764833    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:06.764904    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:06.781227    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:06.781327    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:06.792748    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:06.792816    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:06.803954    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:06.804022    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:06.815097    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:06.815163    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:06.825936    4100 logs.go:276] 0 containers: []
	W0719 11:59:06.825946    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:06.826003    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:06.836421    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:06.836436    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:06.836442    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:06.851621    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:06.851631    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:06.866403    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:06.866415    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:06.878734    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:06.878745    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:06.893278    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:06.893290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:06.907984    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:06.907994    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:06.928963    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:06.928979    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:06.947288    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:06.947302    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:06.958674    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:06.958685    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:06.973481    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:06.973495    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:06.985384    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:06.985397    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:07.003012    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:07.003023    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:07.007704    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:07.007711    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:07.019495    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:07.019510    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:07.045969    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:07.045976    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:07.083293    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:07.083303    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:09.622949    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:14.624653    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:14.625073    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:14.659824    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:14.659989    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:14.681777    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:14.681880    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:14.696249    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:14.696311    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:14.708156    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:14.708242    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:14.719006    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:14.719089    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:14.729580    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:14.729656    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:14.739890    4100 logs.go:276] 0 containers: []
	W0719 11:59:14.739902    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:14.739954    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:14.750442    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:14.750473    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:14.750482    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:14.776672    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:14.776681    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:14.788725    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:14.788738    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:14.800612    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:14.800622    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:14.817998    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:14.818009    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:14.829579    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:14.829588    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:14.844432    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:14.844444    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:14.885195    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:14.885208    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:14.899260    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:14.899273    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:14.920659    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:14.920674    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:14.934905    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:14.934919    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:14.950088    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:14.950102    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:14.987653    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:14.987661    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:15.005864    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:15.005873    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:15.017279    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:15.017292    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:15.034133    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:15.034143    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:17.540396    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:22.541822    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:22.542199    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:22.576154    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:22.576278    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:22.604014    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:22.604091    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:22.617047    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:22.617123    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:22.631988    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:22.632056    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:22.650647    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:22.650718    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:22.661239    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:22.661309    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:22.671376    4100 logs.go:276] 0 containers: []
	W0719 11:59:22.671388    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:22.671449    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:22.682113    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:22.682133    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:22.682139    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:22.696059    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:22.696070    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:22.707682    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:22.707692    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:22.718958    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:22.718967    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:22.758863    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:22.758872    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:22.762985    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:22.762992    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:22.777242    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:22.777255    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:22.791473    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:22.791487    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:22.827131    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:22.827143    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:22.841166    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:22.841176    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:22.867632    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:22.867646    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:22.879547    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:22.879559    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:22.900558    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:22.900568    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:22.915713    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:22.915725    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:22.930718    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:22.930731    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:22.944554    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:22.944567    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:25.464615    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:30.466842    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:30.467029    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:30.479203    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:30.479277    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:30.490288    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:30.490362    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:30.501257    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:30.501325    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:30.511728    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:30.511793    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:30.522420    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:30.522488    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:30.532730    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:30.532790    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:30.547536    4100 logs.go:276] 0 containers: []
	W0719 11:59:30.547548    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:30.547605    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:30.563420    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:30.563436    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:30.563440    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:30.574961    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:30.574970    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:30.600902    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:30.600909    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:30.605164    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:30.605173    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:30.620207    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:30.620218    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:30.634844    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:30.634855    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:30.646670    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:30.646681    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:30.667668    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:30.667681    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:30.706124    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:30.706134    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:30.724330    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:30.724341    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:30.746505    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:30.746524    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:30.760730    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:30.760740    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:30.771837    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:30.771848    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:30.791322    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:30.791334    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:30.808732    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:30.808741    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:30.846766    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:30.846779    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:33.363351    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:38.365571    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:38.365741    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:38.378427    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:38.378521    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:38.389592    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:38.389669    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:38.401206    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:38.401280    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:38.412005    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:38.412075    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:38.422138    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:38.422199    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:38.432869    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:38.432937    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:38.444256    4100 logs.go:276] 0 containers: []
	W0719 11:59:38.444267    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:38.444319    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:38.456569    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:38.456591    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:38.456597    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:38.478020    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:38.478036    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:38.492873    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:38.492887    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:38.508444    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:38.508460    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:38.520296    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:38.520307    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:38.525289    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:38.525296    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:38.543586    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:38.543599    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:38.581061    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:38.581076    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:38.592695    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:38.592707    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:38.604978    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:38.604990    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:38.623996    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:38.624009    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:38.650107    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:38.650115    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:38.690022    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:38.690030    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:38.704122    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:38.704132    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:38.718700    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:38.718710    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:38.736319    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:38.736330    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:41.250540    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:46.251407    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:46.251490    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:46.268638    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:46.268702    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:46.280603    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:46.280671    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:46.291508    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:46.291572    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:46.302950    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:46.303014    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:46.319597    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:46.319659    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:46.330152    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:46.330211    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:46.340424    4100 logs.go:276] 0 containers: []
	W0719 11:59:46.340437    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:46.340492    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:46.350818    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:46.350835    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:46.350840    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:46.365197    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:46.365208    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:46.376868    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:46.376878    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:46.403429    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:46.403439    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:46.425526    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:46.425537    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:46.450058    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:46.450073    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:46.465420    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:46.465435    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:46.504792    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:46.504800    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:46.516286    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:46.516298    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:46.530988    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:46.531001    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:46.548661    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:46.548670    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:46.560740    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:46.560755    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:46.565485    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:46.565491    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:46.579701    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:46.579712    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:46.591550    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:46.591566    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:46.627641    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:46.627652    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:49.143408    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:54.145595    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:54.145789    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:54.172534    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:54.172644    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:54.189676    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:54.189746    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:54.204192    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:54.204267    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:54.220572    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:54.220646    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:54.232196    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:54.232260    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:54.244200    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:54.244264    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:54.255973    4100 logs.go:276] 0 containers: []
	W0719 11:59:54.255988    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:54.256059    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:54.269255    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:54.269276    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:54.269285    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:54.310669    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:54.310682    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:54.327460    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:54.327472    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:54.342878    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:54.342889    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:54.369673    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:54.369689    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:54.387291    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:54.387308    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:54.402809    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:54.402824    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:54.415692    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:54.415705    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:54.428988    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:54.429001    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:54.470736    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:54.470757    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:54.486426    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:54.486441    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:54.502247    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:54.502262    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:54.517701    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:54.517715    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:54.543899    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:54.543916    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:54.549233    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:54.549271    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:54.562431    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:54.562446    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:57.085188    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:02.086864    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:02.087013    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:02.103619    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:02.103722    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:02.116509    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:02.116574    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:02.128058    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:02.128123    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:02.138413    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:02.138476    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:02.148711    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:02.148775    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:02.160860    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:02.160930    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:02.170970    4100 logs.go:276] 0 containers: []
	W0719 12:00:02.170980    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:02.171027    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:02.181495    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:02.181514    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:02.181519    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:02.186383    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:02.186389    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:02.207194    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:02.207205    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:02.222584    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:02.222595    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:02.234129    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:02.234140    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:02.252264    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:02.252275    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:02.267633    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:02.267644    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:02.293141    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:02.293152    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:02.333594    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:02.333606    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:02.348384    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:02.348395    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:02.362853    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:02.362864    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:02.380030    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:02.380041    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:02.394075    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:02.394087    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:02.408657    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:02.408668    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:02.449526    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:02.449550    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:02.471486    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:02.471497    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:04.986320    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:09.988664    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:09.988818    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:10.000390    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:10.000459    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:10.010906    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:10.010979    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:10.021324    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:10.021390    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:10.032331    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:10.032401    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:10.042600    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:10.042672    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:10.057280    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:10.057349    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:10.067517    4100 logs.go:276] 0 containers: []
	W0719 12:00:10.067529    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:10.067587    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:10.078541    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:10.078557    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:10.078562    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:10.092857    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:10.092868    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:10.104929    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:10.104946    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:10.109686    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:10.109691    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:10.130724    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:10.130736    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:10.148179    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:10.148190    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:10.162705    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:10.162717    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:10.202692    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:10.202706    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:10.238746    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:10.238758    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:10.263654    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:10.263665    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:10.275235    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:10.275248    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:10.287765    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:10.287780    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:10.304618    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:10.304628    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:10.326496    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:10.326507    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:10.338479    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:10.338490    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:10.352539    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:10.352549    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:12.869088    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:17.871351    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:17.871509    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:17.887671    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:17.887761    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:17.900204    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:17.900281    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:17.911905    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:17.911976    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:17.922796    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:17.922865    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:17.939194    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:17.939262    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:17.951737    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:17.951809    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:17.962043    4100 logs.go:276] 0 containers: []
	W0719 12:00:17.962061    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:17.962119    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:17.972089    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:17.972106    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:17.972112    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:17.976507    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:17.976513    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:18.013730    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:18.013741    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:18.028193    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:18.028205    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:18.043174    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:18.043186    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:18.070363    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:18.070379    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:18.084784    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:18.084798    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:18.099213    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:18.099222    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:18.113262    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:18.113272    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:18.124272    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:18.124284    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:18.148819    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:18.148829    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:18.189281    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:18.189289    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:18.200505    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:18.200517    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:18.224302    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:18.224312    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:18.238400    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:18.238414    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:18.250763    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:18.250775    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:20.766789    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:25.769501    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:25.769925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:25.813466    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:25.813606    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:25.834883    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:25.834986    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:25.852576    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:25.852652    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:25.864742    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:25.864811    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:25.875527    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:25.875607    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:25.886127    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:25.886206    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:25.896775    4100 logs.go:276] 0 containers: []
	W0719 12:00:25.896784    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:25.896844    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:25.908774    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:25.908797    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:25.908804    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:25.950142    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:25.950165    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:25.964719    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:25.964733    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:25.980911    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:25.980924    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:26.002186    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:26.002196    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:26.023873    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:26.023884    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:26.049241    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:26.049254    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:26.093487    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:26.093498    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:26.108430    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:26.108439    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:26.128697    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:26.128708    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:26.140709    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:26.140720    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:26.154696    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:26.154707    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:26.166262    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:26.166271    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:26.178153    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:26.178163    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:26.184385    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:26.184393    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:26.198803    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:26.198814    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:28.714678    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:33.717049    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:33.717467    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:33.767328    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:33.767434    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:33.784480    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:33.784558    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:33.797161    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:33.797236    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:33.811135    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:33.811212    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:33.822956    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:33.823029    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:33.834555    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:33.834619    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:33.844948    4100 logs.go:276] 0 containers: []
	W0719 12:00:33.844961    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:33.845018    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:33.856050    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:33.856069    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:33.856075    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:33.868157    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:33.868168    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:33.882627    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:33.882638    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:33.898541    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:33.898551    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:33.910841    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:33.910851    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:33.931127    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:33.931137    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:33.968548    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:33.968555    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:33.972903    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:33.972913    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:33.987589    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:33.987600    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:34.009887    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:34.009898    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:34.026504    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:34.026516    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:34.044871    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:34.044884    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:34.073055    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:34.073067    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:34.085401    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:34.085413    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:34.120698    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:34.120709    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:34.142005    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:34.142015    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:36.667606    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:41.670064    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:41.670287    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:41.693768    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:41.693860    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:41.708715    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:41.708802    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:41.721068    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:41.721129    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:41.731901    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:41.731968    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:41.742850    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:41.742907    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:41.753743    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:41.753803    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:41.763457    4100 logs.go:276] 0 containers: []
	W0719 12:00:41.763467    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:41.763514    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:41.774337    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:41.774356    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:41.774362    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:41.779153    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:41.779159    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:41.815348    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:41.815359    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:41.829463    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:41.829473    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:41.844274    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:41.844284    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:41.855114    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:41.855126    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:41.867187    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:41.867198    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:41.891937    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:41.891946    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:41.932221    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:41.932231    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:41.958035    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:41.958045    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:41.972186    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:41.972197    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:41.987392    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:41.987405    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:42.000350    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:42.000360    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:42.019097    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:42.019107    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:42.034381    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:42.034391    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:42.050979    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:42.050989    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:44.564801    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:49.567052    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:49.567320    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:49.593283    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:49.593406    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:49.611100    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:49.611181    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:49.624251    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:49.624311    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:49.635741    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:49.635803    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:49.646190    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:49.646258    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:49.657482    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:49.657544    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:49.668078    4100 logs.go:276] 0 containers: []
	W0719 12:00:49.668089    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:49.668143    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:49.683963    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:49.683980    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:49.683986    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:49.708276    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:49.708285    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:49.721953    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:49.721965    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:49.734213    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:49.734227    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:49.751991    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:49.752004    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:49.764168    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:49.764180    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:49.800229    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:49.800240    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:49.814606    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:49.814615    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:49.829562    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:49.829572    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:49.833914    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:49.833919    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:49.847571    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:49.847582    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:49.862054    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:49.862068    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:49.877047    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:49.877058    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:49.894700    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:49.894713    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:49.909854    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:49.909864    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:49.948058    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:49.948067    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:52.470915    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:57.473182    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:57.473374    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:57.485519    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:57.485604    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:57.495806    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:57.495872    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:57.506034    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:57.506104    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:57.517128    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:57.517192    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:57.527591    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:57.527663    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:57.538321    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:57.538386    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:57.548895    4100 logs.go:276] 0 containers: []
	W0719 12:00:57.548906    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:57.548958    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:57.559562    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:57.559578    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:57.559583    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:57.576678    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:57.576688    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:57.581097    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:57.581107    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:57.617479    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:57.617503    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:57.639473    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:57.639484    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:57.653344    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:57.653356    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:57.667533    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:57.667543    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:57.682646    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:57.682657    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:57.694996    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:57.695007    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:57.707431    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:57.707439    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:57.732377    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:57.732394    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:57.772368    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:57.772377    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:57.794385    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:57.794399    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:57.816429    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:57.816445    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:57.828384    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:57.828396    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:57.842927    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:57.842938    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:00.355401    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:05.357718    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:05.358064    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:05.388676    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:05.388803    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:05.406938    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:05.407021    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:05.420790    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:05.420864    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:05.432427    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:05.432497    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:05.442897    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:05.442967    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:05.453789    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:05.453867    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:05.463878    4100 logs.go:276] 0 containers: []
	W0719 12:01:05.463889    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:05.463945    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:05.476323    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:05.476340    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:05.476346    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:05.515078    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:05.515090    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:05.526935    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:05.526949    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:05.541265    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:05.541277    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:05.578748    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:05.578760    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:05.592150    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:05.592161    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:05.606602    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:05.606614    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:05.621070    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:05.621083    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:05.638281    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:05.638290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:05.649732    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:05.649742    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:05.654582    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:05.654589    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:05.675565    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:05.675578    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:05.690004    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:05.690014    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:05.707017    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:05.707027    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:05.718782    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:05.718793    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:05.733461    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:05.733473    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:08.260327    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:13.262639    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:13.262886    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:13.288268    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:13.288364    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:13.304968    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:13.305047    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:13.317864    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:13.317935    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:13.329369    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:13.329447    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:13.339747    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:13.339811    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:13.352813    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:13.352880    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:13.362854    4100 logs.go:276] 0 containers: []
	W0719 12:01:13.362866    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:13.362925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:13.373445    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:13.373461    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:13.373467    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:13.385369    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:13.385380    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:13.408928    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:13.408937    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:13.446474    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:13.446482    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:13.466861    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:13.466871    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:13.481420    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:13.481432    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:13.486407    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:13.486413    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:13.500490    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:13.500500    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:13.516239    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:13.516249    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:13.530790    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:13.530801    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:13.542217    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:13.542228    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:13.554190    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:13.554201    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:13.568311    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:13.568322    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:13.583421    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:13.583432    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:13.603156    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:13.603171    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:13.617359    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:13.617371    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:16.153735    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:21.156577    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:21.157027    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:21.198667    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:21.198834    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:21.226557    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:21.226658    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:21.240970    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:21.241040    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:21.252825    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:21.252899    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:21.263853    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:21.263924    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:21.274831    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:21.274903    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:21.289984    4100 logs.go:276] 0 containers: []
	W0719 12:01:21.289995    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:21.290059    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:21.308400    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:21.308417    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:21.308424    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:21.323794    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:21.323804    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:21.335075    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:21.335086    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:21.350294    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:21.350304    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:21.392387    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:21.392413    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:21.397301    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:21.397312    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:21.439834    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:21.439845    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:21.462092    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:21.462105    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:21.476278    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:21.476292    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:21.490803    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:21.490813    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:21.505235    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:21.505250    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:21.516852    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:21.516861    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:21.540725    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:21.540733    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:21.551928    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:21.551940    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:21.569795    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:21.569809    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:21.581803    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:21.581816    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:24.099362    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:29.101567    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:29.101778    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:29.125094    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:29.125220    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:29.141813    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:29.141895    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:29.163042    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:29.163107    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:29.174129    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:29.174203    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:29.186584    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:29.186651    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:29.198076    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:29.198144    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:29.208479    4100 logs.go:276] 0 containers: []
	W0719 12:01:29.208491    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:29.208548    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:29.219134    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:29.219157    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:29.219164    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:29.258273    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:29.258283    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:29.263033    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:29.263041    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:29.297712    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:29.297724    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:29.313217    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:29.313228    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:29.333740    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:29.333753    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:29.348697    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:29.348708    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:29.373160    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:29.373167    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:29.387045    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:29.387055    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:29.398765    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:29.398775    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:29.413151    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:29.413164    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:29.427518    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:29.427530    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:29.445629    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:29.445642    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:29.457322    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:29.457335    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:29.472709    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:29.472719    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:29.487813    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:29.487822    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:32.001510    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:37.003888    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:37.004306    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:37.039122    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:37.039257    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:37.058691    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:37.058788    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:37.073502    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:37.073579    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:37.085728    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:37.085804    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:37.101336    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:37.101405    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:37.112061    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:37.112128    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:37.122472    4100 logs.go:276] 0 containers: []
	W0719 12:01:37.122488    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:37.122553    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:37.133274    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:37.133304    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:37.133312    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:37.169239    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:37.169254    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:37.183548    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:37.183559    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:37.198838    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:37.198848    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:37.213445    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:37.213455    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:37.230585    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:37.230597    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:37.254407    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:37.254415    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:37.293777    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:37.293787    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:37.314577    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:37.314586    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:37.326384    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:37.326396    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:37.341414    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:37.341423    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:37.354707    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:37.354720    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:37.359486    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:37.359493    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:37.377761    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:37.377770    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:37.392750    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:37.392759    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:37.404431    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:37.404440    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:39.917882    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:44.920061    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:44.920224    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:44.933321    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:44.933400    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:44.945102    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:44.945170    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:44.955438    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:44.955502    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:44.965373    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:44.965442    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:44.976473    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:44.976541    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:44.986282    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:44.986347    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:44.996339    4100 logs.go:276] 0 containers: []
	W0719 12:01:44.996349    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:44.996401    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:45.006629    4100 logs.go:276] 1 containers: [3a65181cdb60]
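
Every diagnostic pass repeats the same shape: enumerate each control-plane component's containers by Docker name filter, then tail the last 400 lines of each hit. A self-contained sketch of that loop follows (hypothetical helper code; minikube drives these identical commands over SSH via ssh_runner.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same name filters that appear in the log, in the same order.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // Tail the last 400 lines of each hit (output elided here).
                exec.Command("docker", "logs", "--tail", "400", id).Run()
            }
        }
    }
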
	I0719 12:01:45.006650    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:45.006655    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:45.041585    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:45.041599    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:45.056262    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:45.056272    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:45.078393    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:45.078403    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:45.093026    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:45.093039    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:45.105741    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:45.105753    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:45.110968    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:45.110975    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:45.130341    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:45.130352    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:45.146823    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:45.146832    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:45.164913    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:45.164924    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:45.178732    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:45.178743    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:45.196253    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:45.196264    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:45.210837    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:45.210846    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:45.235006    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:45.235013    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:45.274880    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:45.274888    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:45.286636    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:45.286647    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:47.800584    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:52.803176    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:52.803381    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:52.815091    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:52.815161    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:52.830718    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:52.830792    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:52.841556    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:52.841624    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:52.852174    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:52.852245    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:52.862846    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:52.862911    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:52.873746    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:52.873820    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:52.888397    4100 logs.go:276] 0 containers: []
	W0719 12:01:52.888409    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:52.888466    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:52.898549    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:52.898568    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:52.898573    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:52.913165    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:52.913174    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:52.935716    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:52.935725    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:52.947759    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:52.947772    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:52.985967    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:52.985975    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:53.008328    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:53.008338    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:53.028688    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:53.028699    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:53.043193    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:53.043205    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:53.047820    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:53.047830    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:53.065341    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:53.065351    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:53.077089    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:53.077100    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:53.089067    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:53.089079    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:53.103664    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:53.103676    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:53.139725    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:53.139733    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:53.154062    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:53.154075    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:53.165599    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:53.165611    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:55.682251    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:00.684633    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:00.684706    4100 kubeadm.go:597] duration metric: took 4m3.839195333s to restartPrimaryControlPlane
	W0719 12:02:00.684775    4100 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
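
Here the restart budget runs out (see the 4m3.8s duration metric above) and minikube abandons restarting the existing control plane in favor of a full kubeadm reset followed by kubeadm init. A hedged outline of that fallback, with illustrative names rather than minikube's actual kubeadm.go internals:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // probe stands in for the healthz GET sketched earlier; on this run it
    // failed continuously for the whole restart window.
    func probe() error { return errors.New("context deadline exceeded") }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // budget implied by the log
        for time.Now().Before(deadline) {
            if probe() == nil {
                return // control plane came back; no reset needed
            }
            time.Sleep(2500 * time.Millisecond) // rough gap between attempts
        }
        fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
        exec.Command("/bin/bash", "-c",
            "sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force").Run()
        // kubeadm init with the regenerated /var/tmp/minikube/kubeadm.yaml follows.
    }
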
	I0719 12:02:00.684805    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0719 12:02:01.621544    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:02:01.626645    4100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 12:02:01.629556    4100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 12:02:01.632219    4100 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:02:01.632226    4100 kubeadm.go:157] found existing configuration files:
	
	I0719 12:02:01.632250    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/admin.conf
	I0719 12:02:01.634733    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:02:01.634752    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 12:02:01.638561    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/kubelet.conf
	I0719 12:02:01.641755    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:02:01.641776    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 12:02:01.644739    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/controller-manager.conf
	I0719 12:02:01.647186    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:02:01.647211    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 12:02:01.650294    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/scheduler.conf
	I0719 12:02:01.653361    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:02:01.653382    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
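
The grep/rm pairs above implement a simple staleness check: any kubeconfig that does not reference the current control-plane endpoint is removed before kubeadm init regenerates it. (On this run the files are simply absent after the reset, so every grep exits with status 2 and every file is "removed".) A sketch of that loop, with the endpoint taken from the log; minikube runs the same commands over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50327"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range confs {
            // grep exits non-zero if the endpoint is missing or the file
            // does not exist; either way the config is treated as stale.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }
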
	I0719 12:02:01.656163    4100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 12:02:01.676017    4100 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 12:02:01.676056    4100 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 12:02:01.728280    4100 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 12:02:01.728346    4100 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 12:02:01.728401    4100 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 12:02:01.776606    4100 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 12:02:01.779778    4100 out.go:204]   - Generating certificates and keys ...
	I0719 12:02:01.779819    4100 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 12:02:01.779853    4100 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 12:02:01.779911    4100 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 12:02:01.779948    4100 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 12:02:01.779986    4100 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 12:02:01.780013    4100 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 12:02:01.780043    4100 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 12:02:01.780072    4100 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 12:02:01.780111    4100 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 12:02:01.780149    4100 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 12:02:01.780173    4100 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 12:02:01.780204    4100 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 12:02:01.917982    4100 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 12:02:02.010414    4100 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 12:02:02.073142    4100 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 12:02:02.110641    4100 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 12:02:02.139936    4100 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:02:02.140251    4100 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:02:02.140380    4100 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 12:02:02.226511    4100 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 12:02:02.229794    4100 out.go:204]   - Booting up control plane ...
	I0719 12:02:02.230022    4100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 12:02:02.230090    4100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 12:02:02.230213    4100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 12:02:02.230341    4100 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 12:02:02.230630    4100 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 12:02:06.732847    4100 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502250 seconds
	I0719 12:02:06.732912    4100 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 12:02:06.736247    4100 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 12:02:07.262564    4100 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 12:02:07.263034    4100 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-589000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 12:02:07.766778    4100 kubeadm.go:310] [bootstrap-token] Using token: g0ch5u.y4j1a027fyhiu0zl
	I0719 12:02:07.769874    4100 out.go:204]   - Configuring RBAC rules ...
	I0719 12:02:07.769930    4100 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 12:02:07.769977    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 12:02:07.773599    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 12:02:07.774525    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 12:02:07.775511    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 12:02:07.776575    4100 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 12:02:07.779601    4100 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 12:02:07.952464    4100 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 12:02:08.171622    4100 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 12:02:08.171996    4100 kubeadm.go:310] 
	I0719 12:02:08.172024    4100 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 12:02:08.172028    4100 kubeadm.go:310] 
	I0719 12:02:08.172065    4100 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 12:02:08.172072    4100 kubeadm.go:310] 
	I0719 12:02:08.172084    4100 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 12:02:08.172120    4100 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 12:02:08.172147    4100 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 12:02:08.172150    4100 kubeadm.go:310] 
	I0719 12:02:08.172177    4100 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 12:02:08.172181    4100 kubeadm.go:310] 
	I0719 12:02:08.172215    4100 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 12:02:08.172220    4100 kubeadm.go:310] 
	I0719 12:02:08.172248    4100 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 12:02:08.172289    4100 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 12:02:08.172334    4100 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 12:02:08.172339    4100 kubeadm.go:310] 
	I0719 12:02:08.172382    4100 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 12:02:08.172419    4100 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 12:02:08.172424    4100 kubeadm.go:310] 
	I0719 12:02:08.172463    4100 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g0ch5u.y4j1a027fyhiu0zl \
	I0719 12:02:08.172521    4100 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 \
	I0719 12:02:08.172536    4100 kubeadm.go:310] 	--control-plane 
	I0719 12:02:08.172540    4100 kubeadm.go:310] 
	I0719 12:02:08.172579    4100 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 12:02:08.172584    4100 kubeadm.go:310] 
	I0719 12:02:08.172625    4100 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g0ch5u.y4j1a027fyhiu0zl \
	I0719 12:02:08.172685    4100 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 
	I0719 12:02:08.172748    4100 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
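
The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's standard CA pin: a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. The snippet below recomputes it from a CA certificate path passed as the first argument (on the node that would be /var/lib/minikube/certs/ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in input")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
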
	I0719 12:02:08.172756    4100 cni.go:84] Creating CNI manager for ""
	I0719 12:02:08.172765    4100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:02:08.183178    4100 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 12:02:08.186423    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 12:02:08.189480    4100 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
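
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. As a rough illustration of what a bridge conflist of this kind contains, here is a representative config embedded in a small writer; every field value below is an assumption, not the logged payload:

    package main

    import "os"

    // Illustrative bridge CNI config of the sort minikube generates when
    // it "recommends bridge" for the docker runtime on k8s v1.24+.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Requires root, matching the sudo mkdir -p step in the log.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
            []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
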
	I0719 12:02:08.194594    4100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 12:02:08.194650    4100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 12:02:08.194654    4100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-589000 minikube.k8s.io/updated_at=2024_07_19T12_02_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=running-upgrade-589000 minikube.k8s.io/primary=true
	I0719 12:02:08.197751    4100 ops.go:34] apiserver oom_adj: -16
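
The oom_adj check above records how strongly the kernel's OOM killer is biased away from the apiserver process; on the legacy -17..+15 scale, -16 makes it one of the last candidates to be killed. A minimal reader mirroring the logged cat /proc/$(pgrep kube-apiserver)/oom_adj command:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the newest kube-apiserver PID, as `pgrep kube-apiserver` does.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // e.g. -16
    }
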
	I0719 12:02:08.254627    4100 kubeadm.go:1113] duration metric: took 60.011333ms to wait for elevateKubeSystemPrivileges
	I0719 12:02:08.254736    4100 kubeadm.go:394] duration metric: took 4m11.423679834s to StartCluster
	I0719 12:02:08.254749    4100 settings.go:142] acquiring lock: {Name:mk67411000c671a58f92dc65eb422ba28279f174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:02:08.254840    4100 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:02:08.255208    4100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:02:08.255415    4100 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:02:08.255507    4100 config.go:182] Loaded profile config "running-upgrade-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:02:08.255440    4100 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 12:02:08.255541    4100 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-589000"
	I0719 12:02:08.255541    4100 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-589000"
	I0719 12:02:08.255567    4100 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-589000"
	W0719 12:02:08.255571    4100 addons.go:243] addon storage-provisioner should already be in state true
	I0719 12:02:08.255555    4100 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-589000"
	I0719 12:02:08.255581    4100 host.go:66] Checking if "running-upgrade-589000" exists ...
	I0719 12:02:08.256525    4100 kapi.go:59] client config for running-upgrade-589000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106227790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:02:08.256639    4100 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-589000"
	W0719 12:02:08.256644    4100 addons.go:243] addon default-storageclass should already be in state true
	I0719 12:02:08.256651    4100 host.go:66] Checking if "running-upgrade-589000" exists ...
	I0719 12:02:08.259189    4100 out.go:177] * Verifying Kubernetes components...
	I0719 12:02:08.259494    4100 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 12:02:08.263456    4100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 12:02:08.263464    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 12:02:08.267213    4100 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:02:08.271237    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:02:08.275266    4100 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:02:08.275273    4100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 12:02:08.275278    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 12:02:08.370030    4100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:02:08.375623    4100 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:02:08.375662    4100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:02:08.382426    4100 api_server.go:72] duration metric: took 127.00025ms to wait for apiserver process to appear ...
	I0719 12:02:08.382437    4100 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:02:08.382446    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:08.398532    4100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 12:02:08.415623    4100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:02:13.383111    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:13.383167    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:18.384408    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:18.384451    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:23.384617    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:23.384636    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:28.384836    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:28.384863    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:33.385188    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:33.385239    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:38.386036    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:38.386060    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 12:02:38.708452    4100 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 12:02:38.712658    4100 out.go:177] * Enabled addons: storage-provisioner
	I0719 12:02:38.720571    4100 addons.go:510] duration metric: took 30.465557167s for enable addons: enabled=[storage-provisioner]
	I0719 12:02:43.386719    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:43.386768    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:48.387461    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:48.387509    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:53.388615    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:53.388648    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:58.390053    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:58.390089    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:03.391838    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:03.391894    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:08.394126    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:08.394247    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:08.415316    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:08.415391    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:08.427090    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:08.427153    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:08.437780    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:08.437850    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:08.447669    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:08.447731    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:08.460784    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:08.460867    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:08.471307    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:08.471373    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:08.481343    4100 logs.go:276] 0 containers: []
	W0719 12:03:08.481358    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:08.481413    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:08.491522    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:08.491537    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:08.491542    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:08.524905    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:08.524917    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:08.538344    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:08.538357    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:08.549987    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:08.549999    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:08.562026    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:08.562036    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:08.585490    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:08.585499    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:08.590011    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:08.590019    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:08.629390    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:08.629402    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:08.644008    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:08.644020    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:08.658101    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:08.658114    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:08.669652    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:08.669662    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:08.684599    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:08.684611    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:08.702162    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:08.702172    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:11.216045    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:16.218834    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:16.219043    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:16.239648    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:16.239776    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:16.254652    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:16.254727    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:16.267114    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:16.267185    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:16.278475    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:16.278541    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:16.289171    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:16.289243    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:16.299115    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:16.299171    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:16.310153    4100 logs.go:276] 0 containers: []
	W0719 12:03:16.310169    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:16.310335    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:16.321613    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:16.321629    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:16.321634    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:16.335913    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:16.335929    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:16.350441    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:16.350456    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:16.362646    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:16.362659    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:16.374433    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:16.374447    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:16.398125    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:16.398133    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:16.431400    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:16.431407    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:16.435970    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:16.435978    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:16.470553    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:16.470562    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:16.484466    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:16.484481    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:16.496209    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:16.496220    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:16.507405    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:16.507415    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:16.525249    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:16.525259    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:19.038788    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:24.041194    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:24.041553    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:24.084611    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:24.084716    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:24.103835    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:24.103913    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:24.115594    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:24.115666    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:24.126632    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:24.126703    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:24.137671    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:24.137740    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:24.148006    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:24.148069    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:24.158239    4100 logs.go:276] 0 containers: []
	W0719 12:03:24.158250    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:24.158304    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:24.168986    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:24.169002    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:24.169008    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:24.182783    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:24.182794    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:24.194693    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:24.194708    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:24.207281    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:24.207290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:24.222292    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:24.222303    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:24.234008    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:24.234021    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:24.269507    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:24.269515    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:24.274229    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:24.274238    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:24.309374    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:24.309385    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:24.335409    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:24.335421    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:24.347016    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:24.347028    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:24.362131    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:24.362146    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:24.374217    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:24.374230    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:26.893507    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:31.895233    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:31.895404    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:31.908661    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:31.908746    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:31.920333    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:31.920397    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:31.930184    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:31.930253    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:31.940801    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:31.940863    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:31.953929    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:31.953999    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:31.964717    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:31.964779    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:31.974903    4100 logs.go:276] 0 containers: []
	W0719 12:03:31.974913    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:31.974962    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:31.985741    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:31.985756    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:31.985761    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:32.021036    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:32.021052    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:32.035102    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:32.035115    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:32.050562    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:32.050574    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:32.062278    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:32.062289    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:32.079392    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:32.079403    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:32.104311    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:32.104318    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:32.139325    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:32.139333    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:32.143834    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:32.143842    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:32.154969    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:32.154982    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:32.173361    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:32.173375    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:32.184491    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:32.184501    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:32.199563    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:32.199576    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:34.713176    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:39.715366    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:39.715516    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:39.729914    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:39.729990    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:39.741424    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:39.741486    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:39.756181    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:39.756252    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:39.766860    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:39.766925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:39.777546    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:39.777612    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:39.792271    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:39.792334    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:39.802865    4100 logs.go:276] 0 containers: []
	W0719 12:03:39.802881    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:39.802943    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:39.813643    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:39.813661    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:39.813666    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:39.847167    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:39.847178    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:39.852111    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:39.852118    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:39.864312    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:39.864322    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:39.876368    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:39.876379    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:39.900249    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:39.900262    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:39.934039    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:39.934051    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:39.951992    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:39.952000    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:39.965932    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:39.965944    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:39.977854    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:39.977865    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:39.992937    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:39.992954    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:40.011153    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:40.011163    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:40.023421    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:40.023432    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:42.537081    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:47.539628    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
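
The two api_server.go lines above repeat throughout this run: each probe of the apiserver's /healthz endpoint fails with a client timeout after roughly five seconds (compare 12:03:42.537 to 12:03:47.539). Below is a minimal Go sketch of such a probe, not minikube's actual api_server.go; the function name, the InsecureSkipVerify setting, and the exact 5s timeout are assumptions inferred from the timestamps and the error text.

// Sketch only: probe the apiserver /healthz endpoint with a hard 5s
// client timeout, the cadence the log lines above suggest.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// Matches the ~5s gap between each probe and its "stopped:" line.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the VM's apiserver cert is self-signed, so
			// verification is skipped for the probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On timeout this carries "context deadline exceeded
		// (Client.Timeout exceeded while awaiting headers)".
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
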
	I0719 12:03:47.539993    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:47.573749    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:47.573882    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:47.594610    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:47.594702    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:47.608665    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:47.608741    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:47.620819    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:47.620886    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:47.637270    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:47.637340    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:47.647642    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:47.647714    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:47.657879    4100 logs.go:276] 0 containers: []
	W0719 12:03:47.657894    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:47.657948    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:47.668745    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:47.668781    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:47.668787    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:47.673507    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:47.673514    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:47.767883    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:47.767897    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:47.781799    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:47.781810    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:47.793146    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:47.793157    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:47.804854    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:47.804868    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:47.817848    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:47.817861    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:47.854673    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:47.854688    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:47.869532    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:47.869544    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:47.881668    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:47.881681    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:47.896564    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:47.896577    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:47.908617    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:47.908630    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:47.926412    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:47.926423    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:50.453525    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:55.456197    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:55.456615    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:55.491492    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:55.491627    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:55.512708    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:55.512799    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:55.529998    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:55.530076    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:55.542293    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:55.542364    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:55.554948    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:55.555012    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:55.565949    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:55.566016    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:55.580652    4100 logs.go:276] 0 containers: []
	W0719 12:03:55.580665    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:55.580722    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:55.591317    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:55.591331    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:55.591339    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:55.603147    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:55.603159    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:55.618274    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:55.618284    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:55.630389    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:55.630399    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:55.642473    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:55.642487    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:55.667563    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:55.667574    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:55.678774    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:55.678784    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:55.716242    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:55.716250    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:55.754616    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:55.754629    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:55.769014    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:55.769026    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:55.780459    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:55.780470    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:55.799715    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:55.799725    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:55.804377    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:55.804385    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:58.321417    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:03.323572    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:03.323732    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:03.334845    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:03.334914    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:03.346724    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:03.346805    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:03.358064    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:03.358137    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:03.373415    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:03.373482    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:03.383939    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:03.384012    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:03.394763    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:03.394830    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:03.405609    4100 logs.go:276] 0 containers: []
	W0719 12:04:03.405623    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:03.405679    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:03.416266    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:03.416281    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:03.416286    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:03.428262    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:03.428273    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:03.453336    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:03.453349    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:03.466021    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:03.466031    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:03.501897    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:03.501904    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:03.506733    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:03.506739    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:03.518648    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:03.518659    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:03.531780    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:03.531790    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:03.543358    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:03.543373    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:03.559144    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:03.559154    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:03.576771    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:03.576785    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:03.618680    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:03.618691    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:03.633612    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:03.633621    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:06.149569    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:11.151696    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:11.151806    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:11.165498    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:11.165576    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:11.177392    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:11.177454    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:11.188083    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:11.188153    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:11.198530    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:11.198592    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:11.209490    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:11.209552    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:11.219673    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:11.219744    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:11.230407    4100 logs.go:276] 0 containers: []
	W0719 12:04:11.230419    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:11.230473    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:11.241314    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:11.241329    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:11.241334    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:11.253102    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:11.253112    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:11.267759    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:11.267772    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:11.291201    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:11.291211    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:11.324588    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:11.324599    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:11.338931    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:11.338941    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:11.360713    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:11.360724    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:11.372115    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:11.372125    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:11.389521    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:11.389531    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:11.405034    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:11.405047    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:11.416601    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:11.416613    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:11.421328    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:11.421338    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:11.456952    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:11.456965    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:13.977217    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:18.979493    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:18.979636    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:18.991130    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:18.991223    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:19.002695    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:19.002765    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:19.013891    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:19.013962    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:19.025479    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:19.025544    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:19.036992    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:19.037069    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:19.052828    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:19.052902    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:19.064722    4100 logs.go:276] 0 containers: []
	W0719 12:04:19.064734    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:19.064796    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:19.076411    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:19.076428    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:19.076434    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:19.088967    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:19.088980    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:19.101640    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:19.101652    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:19.114397    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:19.114409    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:19.154215    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:19.154228    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:19.169722    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:19.169737    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:19.184175    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:19.184187    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:19.196331    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:19.196346    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:19.214428    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:19.214441    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:19.239086    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:19.239097    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:19.272340    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:19.272352    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:19.276783    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:19.276790    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:19.290115    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:19.290128    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:21.807328    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:26.809661    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:26.810001    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:26.842189    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:26.842302    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:26.862221    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:26.862309    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:26.875699    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:26.875777    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:26.886677    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:26.886743    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:26.897453    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:26.897513    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:26.907552    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:26.907615    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:26.917946    4100 logs.go:276] 0 containers: []
	W0719 12:04:26.917957    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:26.918007    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:26.928588    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:26.928605    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:26.928611    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:26.940048    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:26.940061    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:26.960592    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:26.960605    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:26.973069    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:26.973080    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:26.978186    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:26.978195    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:26.989746    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:26.989757    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:27.004999    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:27.005012    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:27.016413    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:27.016424    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:27.028501    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:27.028512    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:27.063342    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:27.063352    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:27.086315    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:27.086324    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:27.103785    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:27.103795    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:27.117970    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:27.117983    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:27.132743    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:27.132754    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:27.156247    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:27.156257    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
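
Each failed probe triggers the same collection pass: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per control-plane component, then docker logs --tail 400 for every ID found. Note the coredns count jumping from 2 to 4 containers at 12:04:26; since ps -a also lists exited containers, the new IDs likely mean the coredns pods were restarted rather than scaled up. A sketch of that enumerate-and-tail pass follows, assuming docker on PATH; containerIDs and the output framing are illustrative, not minikube's logs.go.

// Sketch only: enumerate the k8s_* containers and tail each one's logs,
// mirroring the ssh_runner.go commands shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
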
	I0719 12:04:29.695175    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:34.697573    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:34.697832    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:34.725733    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:34.725862    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:34.745816    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:34.745899    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:34.759240    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:34.759314    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:34.770500    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:34.770569    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:34.781404    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:34.781475    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:34.792293    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:34.792354    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:34.802841    4100 logs.go:276] 0 containers: []
	W0719 12:04:34.802855    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:34.802915    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:34.819924    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:34.819942    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:34.819947    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:34.845252    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:34.845263    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:34.859701    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:34.859713    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:34.875436    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:34.875452    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:34.887729    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:34.887741    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:34.905364    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:34.905374    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:34.943344    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:34.943357    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:34.948195    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:34.948205    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:34.962462    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:34.962477    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:34.973977    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:34.973988    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:34.985692    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:34.985702    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:34.996873    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:34.996889    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:35.030482    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:35.030497    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:35.048630    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:35.048642    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:35.060429    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:35.060444    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:37.574223    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:42.576923    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:42.577153    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:42.599078    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:42.599199    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:42.615328    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:42.615403    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:42.630494    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:42.630567    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:42.642065    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:42.642133    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:42.652977    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:42.653046    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:42.664797    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:42.664855    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:42.675414    4100 logs.go:276] 0 containers: []
	W0719 12:04:42.675425    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:42.675485    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:42.693094    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:42.693111    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:42.693117    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:42.707039    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:42.707050    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:42.742208    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:42.742216    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:42.746935    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:42.746940    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:42.758806    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:42.758815    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:42.770440    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:42.770450    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:42.789261    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:42.789275    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:42.800755    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:42.800765    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:42.814187    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:42.814199    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:42.826360    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:42.826370    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:42.838290    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:42.838309    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:42.856242    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:42.856257    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:42.881189    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:42.881196    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:42.892337    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:42.892347    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:42.928008    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:42.928020    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:45.440739    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:50.442963    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:50.443131    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:50.457018    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:50.457100    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:50.468389    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:50.468455    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:50.480002    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:50.480071    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:50.494006    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:50.494064    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:50.504211    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:50.504268    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:50.514899    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:50.514969    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:50.525152    4100 logs.go:276] 0 containers: []
	W0719 12:04:50.525163    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:50.525221    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:50.535775    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:50.535792    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:50.535799    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:50.540856    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:50.540862    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:50.551975    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:50.551986    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:50.575605    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:50.575613    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:50.592929    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:50.592941    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:50.610277    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:50.610291    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:50.622412    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:50.622425    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:50.634296    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:50.634307    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:50.649375    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:50.649391    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:50.661661    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:50.661675    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:50.672912    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:50.672922    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:50.707376    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:50.707384    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:50.742243    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:50.742254    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:50.756556    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:50.756566    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:50.770555    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:50.770568    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:53.284524    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:58.287082    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:58.287246    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:58.301236    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:58.301319    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:58.312717    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:58.312787    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:58.329072    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:58.329143    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:58.339857    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:58.339925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:58.353698    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:58.353766    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:58.368506    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:58.368572    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:58.379531    4100 logs.go:276] 0 containers: []
	W0719 12:04:58.379541    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:58.379597    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:58.390088    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:58.390103    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:58.390109    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:58.395383    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:58.395392    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:58.432551    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:58.432562    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:58.451821    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:58.451832    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:58.463506    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:58.463517    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:58.496237    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:58.496244    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:58.507913    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:58.507923    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:58.520184    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:58.520194    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:58.535438    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:58.535448    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:58.549250    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:58.549259    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:58.561818    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:58.561827    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:58.579331    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:58.579342    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:58.603207    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:58.603215    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:58.615556    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:58.615566    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:58.627765    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:58.627779    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:01.142226    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:06.142797    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:06.142936    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:06.157690    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:06.157772    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:06.170357    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:06.170426    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:06.192248    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:06.192320    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:06.204105    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:06.204176    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:06.214609    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:06.214666    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:06.225498    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:06.225566    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:06.236527    4100 logs.go:276] 0 containers: []
	W0719 12:05:06.236538    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:06.236598    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:06.254543    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:06.254561    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:06.254566    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:06.290301    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:06.290321    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:06.302826    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:06.302837    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:06.327105    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:06.327114    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:06.338211    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:06.338224    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:06.351767    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:06.351780    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:06.363627    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:06.363637    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:06.381375    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:06.381388    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:06.393087    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:06.393096    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:06.408334    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:06.408344    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:06.412764    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:06.412770    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:06.427109    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:06.427122    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:06.443488    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:06.443499    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:06.478459    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:06.478470    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:06.492872    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:06.492883    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
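
Stepping back, the cadence of the whole section is an outer wait loop: probe, collect diagnostics on failure, pause briefly, and repeat until an overall deadline expires. A sketch of that shape, with stubbed probe/collect callbacks; waitForAPIServer, the 2s pause (inferred from the ~2.5s gap between a cycle's last gather and the next probe), and the deadline value are all assumptions, not minikube's real control flow.

// Sketch only: the retry loop implied by the repeating probe/collect
// cycles in the log above.
package main

import (
	"fmt"
	"time"
)

func waitForAPIServer(probe func() error, collect func(), deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if err := probe(); err == nil {
			return nil
		}
		collect()                   // the per-component log pass shown above
		time.Sleep(2 * time.Second) // approximates the gap before the next probe
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	// Stubs standing in for the healthz probe and log collection.
	probe := func() error { return fmt.Errorf("context deadline exceeded") }
	collect := func() { fmt.Println("gathering logs ...") }
	fmt.Println(waitForAPIServer(probe, collect, 10*time.Second))
}
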
	I0719 12:05:09.006831    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:14.009170    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:14.009392    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:14.038919    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:14.039033    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:14.058625    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:14.058697    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:14.072176    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:14.072255    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:14.083735    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:14.083801    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:14.094205    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:14.094270    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:14.109529    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:14.109592    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:14.119588    4100 logs.go:276] 0 containers: []
	W0719 12:05:14.119608    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:14.119667    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:14.130631    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:14.130648    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:14.130653    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:14.142995    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:14.143004    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:14.167983    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:14.167990    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:14.180166    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:14.180177    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:14.184804    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:14.184814    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:14.196818    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:14.196830    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:14.208695    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:14.208708    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:14.226176    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:14.226186    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:14.245004    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:14.245014    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:14.280419    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:14.280428    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:14.292083    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:14.292094    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:14.328195    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:14.328209    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:14.345893    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:14.345905    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:14.361978    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:14.361990    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:14.378904    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:14.378915    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:16.897658    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:21.899874    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:21.900039    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:21.917328    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:21.917418    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:21.931366    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:21.931437    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:21.943177    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:21.943246    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:21.954193    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:21.954265    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:21.964932    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:21.965001    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:21.975618    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:21.975686    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:21.985947    4100 logs.go:276] 0 containers: []
	W0719 12:05:21.985959    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:21.986018    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:21.996637    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:21.996651    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:21.996656    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:22.031278    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:22.031290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:22.044989    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:22.044999    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:22.057504    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:22.057518    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:22.073601    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:22.073614    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:22.084974    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:22.084984    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:22.120272    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:22.120283    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:22.132684    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:22.132694    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:22.144860    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:22.144871    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:22.162898    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:22.162909    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:22.186215    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:22.186222    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:22.190846    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:22.190852    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:22.205240    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:22.205255    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:22.216494    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:22.216504    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:22.232121    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:22.232131    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:24.745871    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:29.748214    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:29.748332    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:29.759869    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:29.759939    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:29.770920    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:29.770986    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:29.781900    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:29.781971    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:29.792738    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:29.792804    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:29.803294    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:29.803355    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:29.813873    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:29.813938    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:29.828693    4100 logs.go:276] 0 containers: []
	W0719 12:05:29.828706    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:29.828757    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:29.839178    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:29.839195    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:29.839200    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:29.854860    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:29.854874    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:29.866731    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:29.866744    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:29.882025    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:29.882036    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:29.900227    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:29.900238    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:29.935894    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:29.935916    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:29.971544    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:29.971559    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:29.983666    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:29.983680    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:29.995354    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:29.995364    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:30.011299    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:30.011310    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:30.036479    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:30.036486    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:30.056797    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:30.056808    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:30.070592    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:30.070604    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:30.084588    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:30.084598    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:30.089148    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:30.089156    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:32.602687    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:37.604905    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:37.605018    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:37.618320    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:37.618390    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:37.629884    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:37.629952    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:37.641251    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:37.641328    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:37.652558    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:37.652622    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:37.663332    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:37.663403    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:37.674255    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:37.674332    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:37.683614    4100 logs.go:276] 0 containers: []
	W0719 12:05:37.683624    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:37.683672    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:37.694143    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:37.694161    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:37.694167    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:37.711668    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:37.711680    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:37.723866    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:37.723877    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:37.748407    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:37.748414    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:37.763001    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:37.763014    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:37.777497    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:37.777506    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:37.789192    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:37.789203    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:37.800809    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:37.800824    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:37.812467    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:37.812477    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:37.817137    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:37.817143    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:37.828761    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:37.828771    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:37.864513    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:37.864527    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:37.900314    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:37.900325    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:37.912524    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:37.912535    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:37.934295    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:37.934305    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:40.451675    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:45.453968    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:45.454121    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:45.491890    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:45.491968    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:45.509947    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:45.510033    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:45.523393    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:45.523466    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:45.540026    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:45.540086    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:45.550659    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:45.550726    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:45.561433    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:45.561489    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:45.572051    4100 logs.go:276] 0 containers: []
	W0719 12:05:45.572061    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:45.572109    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:45.582476    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:45.582493    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:45.582500    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:45.602036    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:45.602047    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:45.613555    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:45.613566    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:45.627337    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:45.627350    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:45.642880    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:45.642892    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:45.660911    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:45.660923    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:45.672212    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:45.672223    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:45.694942    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:45.694951    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:45.728163    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:45.728171    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:45.732504    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:45.732513    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:45.744665    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:45.744674    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:45.759576    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:45.759587    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:45.796935    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:45.796947    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:45.819224    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:45.819235    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:45.831740    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:45.831752    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:48.347758    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:53.349842    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:53.350056    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:53.372422    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:53.372504    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:53.384921    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:53.384992    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:53.395965    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:53.396034    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:53.407226    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:53.407286    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:53.417474    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:53.417540    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:53.427506    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:53.427577    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:53.437746    4100 logs.go:276] 0 containers: []
	W0719 12:05:53.437759    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:53.437817    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:53.449790    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:53.449807    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:53.449812    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:53.485314    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:53.485339    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:53.497912    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:53.497927    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:53.509405    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:53.509416    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:53.524299    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:53.524313    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:53.536014    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:53.536032    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:53.540821    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:53.540839    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:53.554955    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:53.554964    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:53.579363    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:53.579371    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:53.594351    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:53.594361    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:53.612148    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:53.612165    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:53.623966    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:53.623979    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:53.659954    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:53.659966    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:53.674249    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:53.674260    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:53.685868    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:53.685879    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:56.199442    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:01.201830    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:01.202062    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:01.222678    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:06:01.222771    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:01.238133    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:06:01.238213    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:01.250729    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:06:01.250801    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:01.261292    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:06:01.261354    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:01.272411    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:06:01.272475    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:01.286501    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:06:01.286570    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:01.298420    4100 logs.go:276] 0 containers: []
	W0719 12:06:01.298430    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:01.298484    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:01.309087    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:06:01.309103    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:01.309110    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:01.314158    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:06:01.314165    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:06:01.329089    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:06:01.329102    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:06:01.340941    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:06:01.340953    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:06:01.355028    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:06:01.355040    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:06:01.367233    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:06:01.367246    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:06:01.385029    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:06:01.385039    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:06:01.396733    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:06:01.396750    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:06:01.408349    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:06:01.408360    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:06:01.420211    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:06:01.420224    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:01.432498    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:01.432509    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:01.465779    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:01.465787    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:01.501514    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:06:01.501527    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:06:01.516640    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:06:01.516649    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:06:01.532878    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:01.532889    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:04.059477    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:09.061759    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:09.066404    4100 out.go:177] 
	W0719 12:06:09.070203    4100 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0719 12:06:09.070214    4100 out.go:239] * 
	W0719 12:06:09.070982    4100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:06:09.082224    4100 out.go:177] 

                                                
                                                
** /stderr **
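Note: the repeated api_server.go:253/269 lines in the stderr above show minikube polling the apiserver's /healthz endpoint at https://10.0.2.15:8443 every few seconds, each probe timing out after ~5s, until the overall 6m0s node-start deadline expires. The following is a minimal, illustrative Go sketch of that polling pattern only — it is not minikube's actual implementation; the function name, client settings, and retry interval are assumptions, while the URL, the per-request timeout behavior, and the 6m0s deadline are taken from the log:

	// Sketch (assumed, for illustration): poll an apiserver /healthz URL
	// with a short per-request timeout until it answers "ok" or an
	// overall deadline passes, mirroring the pattern visible in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s "Client.Timeout exceeded" gaps above
			Transport: &http.Transport{
				// The in-VM apiserver serves a self-signed certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second) // back off before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("wait for healthy API server:", err)
		}
	}
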
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-589000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-19 12:06:09.187906 -0700 PDT m=+3194.529617334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-589000 -n running-upgrade-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-589000 -n running-upgrade-589000: exit status 2 (15.630482792s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
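To reproduce the failure and the post-mortem status check locally, the exact invocations from the harness output above can be re-run (this assumes a minikube checkout with the integration binaries already built into out/):

	out/minikube-darwin-arm64 start -p running-upgrade-589000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
	out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-589000 -n running-upgrade-589000
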
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-589000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-env-164000              | force-systemd-env-164000  | jenkins | v1.33.1 | 19 Jul 24 11:54 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-164000           | force-systemd-env-164000  | jenkins | v1.33.1 | 19 Jul 24 11:54 PDT | 19 Jul 24 11:54 PDT |
	| start   | -p docker-flags-036000                | docker-flags-036000       | jenkins | v1.33.1 | 19 Jul 24 11:54 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-729000             | force-systemd-flag-729000 | jenkins | v1.33.1 | 19 Jul 24 11:54 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-729000          | force-systemd-flag-729000 | jenkins | v1.33.1 | 19 Jul 24 11:54 PDT | 19 Jul 24 11:54 PDT |
	| start   | -p cert-expiration-532000             | cert-expiration-532000    | jenkins | v1.33.1 | 19 Jul 24 11:54 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-036000 ssh               | docker-flags-036000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-036000 ssh               | docker-flags-036000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-036000                | docker-flags-036000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT | 19 Jul 24 11:55 PDT |
	| start   | -p cert-options-808000                | cert-options-808000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-808000 ssh               | cert-options-808000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-808000 -- sudo        | cert-options-808000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-808000                | cert-options-808000       | jenkins | v1.33.1 | 19 Jul 24 11:55 PDT | 19 Jul 24 11:55 PDT |
	| start   | -p running-upgrade-589000             | minikube                  | jenkins | v1.26.0 | 19 Jul 24 11:55 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-589000             | minikube                  | jenkins | v1.26.0 | 19 Jul 24 11:57 PDT | 19 Jul 24 11:57 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-589000             | running-upgrade-589000    | jenkins | v1.33.1 | 19 Jul 24 11:57 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-532000             | cert-expiration-532000    | jenkins | v1.33.1 | 19 Jul 24 11:58 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-532000             | cert-expiration-532000    | jenkins | v1.33.1 | 19 Jul 24 11:58 PDT | 19 Jul 24 11:58 PDT |
	| start   | -p kubernetes-upgrade-620000          | kubernetes-upgrade-620000 | jenkins | v1.33.1 | 19 Jul 24 11:58 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-620000          | kubernetes-upgrade-620000 | jenkins | v1.33.1 | 19 Jul 24 11:58 PDT | 19 Jul 24 11:58 PDT |
	| start   | -p kubernetes-upgrade-620000          | kubernetes-upgrade-620000 | jenkins | v1.33.1 | 19 Jul 24 11:58 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-620000          | kubernetes-upgrade-620000 | jenkins | v1.33.1 | 19 Jul 24 11:58 PDT | 19 Jul 24 11:58 PDT |
	| start   | -p stopped-upgrade-275000             | minikube                  | jenkins | v1.26.0 | 19 Jul 24 11:58 PDT | 19 Jul 24 11:59 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-275000 stop           | minikube                  | jenkins | v1.26.0 | 19 Jul 24 11:59 PDT | 19 Jul 24 11:59 PDT |
	| start   | -p stopped-upgrade-275000             | stopped-upgrade-275000    | jenkins | v1.33.1 | 19 Jul 24 11:59 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:59:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:59:24.825527    4225 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:59:24.825680    4225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:59:24.825688    4225 out.go:304] Setting ErrFile to fd 2...
	I0719 11:59:24.825691    4225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:59:24.825889    4225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:59:24.827131    4225 out.go:298] Setting JSON to false
	I0719 11:59:24.846668    4225 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3527,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:59:24.846749    4225 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:59:24.852125    4225 out.go:177] * [stopped-upgrade-275000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:59:24.859121    4225 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:59:24.859173    4225 notify.go:220] Checking for updates...
	I0719 11:59:24.865075    4225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:59:24.866142    4225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:59:24.869052    4225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:59:24.872063    4225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:59:24.875080    4225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:59:24.878310    4225 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:59:24.881034    4225 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 11:59:24.884089    4225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:59:24.888086    4225 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:59:24.895064    4225 start.go:297] selected driver: qemu2
	I0719 11:59:24.895069    4225 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:59:24.895110    4225 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:59:24.897621    4225 cni.go:84] Creating CNI manager for ""
	I0719 11:59:24.897637    4225 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:59:24.897677    4225 start.go:340] cluster config:
	{Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:59:24.897725    4225 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:59:24.905044    4225 out.go:177] * Starting "stopped-upgrade-275000" primary control-plane node in "stopped-upgrade-275000" cluster
	I0719 11:59:24.907997    4225 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 11:59:24.908010    4225 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0719 11:59:24.908016    4225 cache.go:56] Caching tarball of preloaded images
	I0719 11:59:24.908074    4225 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:59:24.908079    4225 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0719 11:59:24.908125    4225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/config.json ...
	I0719 11:59:24.908507    4225 start.go:360] acquireMachinesLock for stopped-upgrade-275000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:59:24.908537    4225 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "stopped-upgrade-275000"
	I0719 11:59:24.908546    4225 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:59:24.908550    4225 fix.go:54] fixHost starting: 
	I0719 11:59:24.908646    4225 fix.go:112] recreateIfNeeded on stopped-upgrade-275000: state=Stopped err=<nil>
	W0719 11:59:24.908655    4225 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:59:24.915906    4225 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-275000" ...
	I0719 11:59:25.464615    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:24.920089    4225 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:59:24.920149    4225 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50502-:22,hostfwd=tcp::50503-:2376,hostname=stopped-upgrade-275000 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/disk.qcow2
	I0719 11:59:24.964919    4225 main.go:141] libmachine: STDOUT: 
	I0719 11:59:24.964947    4225 main.go:141] libmachine: STDERR: 
	I0719 11:59:24.964953    4225 main.go:141] libmachine: Waiting for VM to start (ssh -p 50502 docker@127.0.0.1)...
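
The two libmachine lines above show the qemu2 driver's restart pattern: qemu-system-aarch64 is daemonized with a user-mode NIC that forwards host port 50502 to the guest's sshd, and the driver then blocks on "Waiting for VM to start" until that port answers. A minimal Go sketch of such a wait, assuming a dial-only reachability check (the helper name and timings are illustrative, not minikube's actual code):

    // Poll the forwarded SSH port until the guest's sshd accepts a connection.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // something is listening; the VM is up
            }
            time.Sleep(time.Second) // back off before the next dial
        }
        return fmt.Errorf("VM never exposed SSH on %s within %s", addr, timeout)
    }

    func main() {
        // 50502 is the hostfwd port from the qemu command line above
        fmt.Println(waitForSSH("127.0.0.1:50502", 3*time.Minute))
    }
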
	I0719 11:59:30.466842    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:30.467029    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:30.479203    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:30.479277    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:30.490288    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:30.490362    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:30.501257    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:30.501325    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:30.511728    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:30.511793    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:30.522420    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:30.522488    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:30.532730    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:30.532790    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:30.547536    4100 logs.go:276] 0 containers: []
	W0719 11:59:30.547548    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:30.547605    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:30.563420    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:30.563436    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:30.563440    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:30.574961    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:30.574970    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:30.600902    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:30.600909    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:30.605164    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:30.605173    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:30.620207    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:30.620218    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:30.634844    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:30.634855    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:30.646670    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:30.646681    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:30.667668    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:30.667681    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:30.706124    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:30.706134    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:30.724330    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:30.724341    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:30.746505    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:30.746524    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:30.760730    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:30.760740    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:30.771837    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:30.771848    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:30.791322    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:30.791334    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:30.808732    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:30.808741    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:30.846766    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:30.846779    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
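
Each "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pair above is one iteration of a bounded retry loop: a GET against /healthz with a short per-request timeout, repeated until an overall deadline, with a log-gathering pass between probes. A minimal sketch of the probe side in Go, assuming a self-signed apiserver cert (hence the skipped TLS verification); the helper name and intervals are assumptions:

    // Retry GET /healthz until it returns 200 or the overall deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-probe timeout, the Client.Timeout in the log
            Transport: &http.Transport{
                // the apiserver inside the VM presents a self-signed cert
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver is healthy
                }
            }
            time.Sleep(2 * time.Second) // pause before the next probe
        }
        return fmt.Errorf("apiserver at %s never reported healthy within %s", url, overall)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
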
	I0719 11:59:33.363351    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:38.365571    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:38.365741    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:38.378427    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:38.378521    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:38.389592    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:38.389669    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:38.401206    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:38.401280    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:38.412005    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:38.412075    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:38.422138    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:38.422199    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:38.432869    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:38.432937    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:38.444256    4100 logs.go:276] 0 containers: []
	W0719 11:59:38.444267    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:38.444319    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:38.456569    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:38.456591    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:38.456597    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:38.478020    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:38.478036    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:38.492873    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:38.492887    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:38.508444    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:38.508460    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:38.520296    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:38.520307    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:38.525289    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:38.525296    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:38.543586    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:38.543599    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:38.581061    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:38.581076    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:38.592695    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:38.592707    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:38.604978    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:38.604990    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:38.623996    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:38.624009    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:38.650107    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:38.650115    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:38.690022    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:38.690030    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:38.704122    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:38.704132    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:38.718700    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:38.718710    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:38.736319    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:38.736330    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
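
Between probes, the runner inventories each control-plane component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` and tails the last 400 lines of every matching container, which is exactly the repetitive block above. A compressed sketch of that loop, run locally with os/exec for simplicity (minikube executes the same commands over SSH via its ssh_runner):

    // List containers per component, then tail each one's logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        // same filter the log shows: name=k8s_<component>
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
            }
        }
    }
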
	I0719 11:59:41.250540    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:44.748079    4225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/config.json ...
	I0719 11:59:44.748325    4225 machine.go:94] provisionDockerMachine start ...
	I0719 11:59:44.748377    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:44.748515    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:44.748520    4225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 11:59:44.803216    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 11:59:44.803237    4225 buildroot.go:166] provisioning hostname "stopped-upgrade-275000"
	I0719 11:59:44.803285    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:44.803406    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:44.803413    4225 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-275000 && echo "stopped-upgrade-275000" | sudo tee /etc/hostname
	I0719 11:59:44.862550    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-275000
	
	I0719 11:59:44.862606    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:44.862725    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:44.862734    4225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-275000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-275000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-275000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 11:59:44.918279    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 11:59:44.918291    4225 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1066/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1066/.minikube}
	I0719 11:59:44.918305    4225 buildroot.go:174] setting up certificates
	I0719 11:59:44.918309    4225 provision.go:84] configureAuth start
	I0719 11:59:44.918313    4225 provision.go:143] copyHostCerts
	I0719 11:59:44.918386    4225 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem, removing ...
	I0719 11:59:44.918393    4225 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem
	I0719 11:59:44.918696    4225 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem (1123 bytes)
	I0719 11:59:44.918924    4225 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem, removing ...
	I0719 11:59:44.918928    4225 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem
	I0719 11:59:44.918997    4225 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem (1679 bytes)
	I0719 11:59:44.919101    4225 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem, removing ...
	I0719 11:59:44.919105    4225 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem
	I0719 11:59:44.919154    4225 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem (1082 bytes)
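
The found/rm/cp triplets above implement a simple refresh: any stale copy of cert.pem, key.pem, or ca.pem under .minikube is removed before the canonical file from .minikube/certs is copied back in. A sketch of one such refresh in Go (paths abbreviated, helper name hypothetical):

    // Replace a possibly stale cert with a fresh copy from the certs dir.
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func refreshCert(src, dst string) error {
        // the "found ..., removing ..." step: drop the old copy first
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in) // the "cp: ... --> ..." step
        return err
    }

    func main() {
        fmt.Println(refreshCert(".minikube/certs/ca.pem", ".minikube/ca.pem"))
    }
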
	I0719 11:59:44.919244    4225 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-275000 san=[127.0.0.1 localhost minikube stopped-upgrade-275000]
	I0719 11:59:45.105085    4225 provision.go:177] copyRemoteCerts
	I0719 11:59:45.105129    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 11:59:45.105138    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 11:59:45.135465    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 11:59:45.142251    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 11:59:45.149336    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 11:59:45.156157    4225 provision.go:87] duration metric: took 237.842125ms to configureAuth
	I0719 11:59:45.156165    4225 buildroot.go:189] setting minikube options for container-runtime
	I0719 11:59:45.156273    4225 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:59:45.156305    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.156391    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.156395    4225 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 11:59:45.211513    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 11:59:45.211522    4225 buildroot.go:70] root file system type: tmpfs
	I0719 11:59:45.211575    4225 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 11:59:45.211633    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.211745    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.211777    4225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 11:59:45.272077    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 11:59:45.272125    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.272244    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.272251    4225 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 11:59:45.615040    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 11:59:45.615053    4225 machine.go:97] duration metric: took 866.734125ms to provisionDockerMachine
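
The SSH command above encodes a write-compare-swap idiom: the rendered unit goes to docker.service.new, and `diff -u` exiting non-zero (the files differ, or, as here, no unit exists yet) is the trigger to move the new file into place and daemon-reload/enable/restart the service. The same idiom driven from Go, as a sketch (helper name hypothetical; the shell commands are the ones from the log):

    // Stage a candidate docker.service and swap it in only when it differs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func updateDockerUnit(rendered string) error {
        // stage the candidate unit; the log does this with `printf ... | sudo tee`
        stage := exec.Command("sudo", "tee", "/lib/systemd/system/docker.service.new")
        stage.Stdin = strings.NewReader(rendered)
        if err := stage.Run(); err != nil {
            return err
        }
        // diff exits non-zero when the files differ or the live unit is missing,
        // which are exactly the cases where the swap-and-restart branch should run
        if exec.Command("sudo", "diff", "-u",
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new").Run() != nil {
            swap := "sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service" +
                " && sudo systemctl -f daemon-reload" +
                " && sudo systemctl -f enable docker" +
                " && sudo systemctl -f restart docker"
            return exec.Command("/bin/bash", "-c", swap).Run()
        }
        return nil // unit unchanged, nothing to do
    }

    func main() {
        fmt.Println(updateDockerUnit("[Unit]\nDescription=Docker Application Container Engine\n"))
    }
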
	I0719 11:59:45.615059    4225 start.go:293] postStartSetup for "stopped-upgrade-275000" (driver="qemu2")
	I0719 11:59:45.615065    4225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 11:59:45.615120    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 11:59:45.615130    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 11:59:45.644177    4225 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 11:59:45.645338    4225 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 11:59:45.645347    4225 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1066/.minikube/addons for local assets ...
	I0719 11:59:45.645440    4225 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1066/.minikube/files for local assets ...
	I0719 11:59:45.645560    4225 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0719 11:59:45.645694    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 11:59:45.648282    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0719 11:59:45.655118    4225 start.go:296] duration metric: took 40.053375ms for postStartSetup
	I0719 11:59:45.655131    4225 fix.go:56] duration metric: took 20.746864s for fixHost
	I0719 11:59:45.655161    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.655266    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.655270    4225 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 11:59:45.711912    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415585.978453962
	
	I0719 11:59:45.711926    4225 fix.go:216] guest clock: 1721415585.978453962
	I0719 11:59:45.711930    4225 fix.go:229] Guest: 2024-07-19 11:59:45.978453962 -0700 PDT Remote: 2024-07-19 11:59:45.655133 -0700 PDT m=+20.861452792 (delta=323.320962ms)
	I0719 11:59:45.711950    4225 fix.go:200] guest clock delta is within tolerance: 323.320962ms
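
The fix step above reads the guest's `date +%s.%N`, compares it against the host clock, and accepts the restart when the delta (here ~323ms) is inside tolerance. A sketch of that comparison in Go; the 2s tolerance is an assumption, not necessarily the value minikube uses, and float parsing trims some nanosecond precision:

    // Parse the guest's epoch reading and compute |guest - host|.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(hostNow)
        if delta < 0 {
            delta = -delta // tolerance check is on the absolute skew
        }
        return delta, nil
    }

    func main() {
        // guest reading and host timestamp as captured in the log above
        host := time.Date(2024, time.July, 19, 11, 59, 45, 655133000,
            time.FixedZone("PDT", -7*3600))
        d, _ := clockDelta("1721415585.978453962", host)
        fmt.Printf("delta=%v within tolerance: %v\n", d, d <= 2*time.Second)
    }
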
	I0719 11:59:45.711952    4225 start.go:83] releasing machines lock for "stopped-upgrade-275000", held for 20.803694417s
	I0719 11:59:45.712025    4225 ssh_runner.go:195] Run: cat /version.json
	I0719 11:59:45.712037    4225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 11:59:45.712035    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 11:59:45.712056    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	W0719 11:59:45.842045    4225 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0719 11:59:45.842104    4225 ssh_runner.go:195] Run: systemctl --version
	I0719 11:59:45.844224    4225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 11:59:45.845863    4225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 11:59:45.845890    4225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 11:59:45.848666    4225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 11:59:45.854032    4225 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 11:59:45.854041    4225 start.go:495] detecting cgroup driver to use...
	I0719 11:59:45.854115    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 11:59:45.865642    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0719 11:59:45.868918    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 11:59:45.871748    4225 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 11:59:45.871778    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 11:59:45.874747    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 11:59:45.877730    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 11:59:45.880648    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 11:59:45.884017    4225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 11:59:45.886815    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 11:59:45.889730    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 11:59:45.894234    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 11:59:45.897753    4225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 11:59:45.901034    4225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 11:59:45.904124    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:45.967363    4225 ssh_runner.go:195] Run: sudo systemctl restart containerd
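
The run of sed invocations above rewrites /etc/containerd/config.toml so containerd matches the kubelet's cgroupfs driver: SystemdCgroup is pinned to false, legacy runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. One of those edits expressed in Go for readers who don't think in sed; the function name is hypothetical and the regex mirrors the sed expression from the log:

    // Pin SystemdCgroup = false in a containerd config, preserving indentation.
    package main

    import (
        "fmt"
        "regexp"
    )

    func forceCgroupfs(configToml string) string {
        // mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configToml, "${1}SystemdCgroup = false")
    }

    func main() {
        fmt.Print(forceCgroupfs("[plugins]\n  SystemdCgroup = true\n"))
    }
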
	I0719 11:59:45.973805    4225 start.go:495] detecting cgroup driver to use...
	I0719 11:59:45.973892    4225 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 11:59:45.980306    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 11:59:45.984899    4225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 11:59:45.994405    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 11:59:45.999600    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 11:59:46.004164    4225 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 11:59:46.057713    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 11:59:46.063046    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 11:59:46.069140    4225 ssh_runner.go:195] Run: which cri-dockerd
	I0719 11:59:46.070386    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 11:59:46.073547    4225 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 11:59:46.078500    4225 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 11:59:46.143276    4225 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 11:59:46.206699    4225 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 11:59:46.206757    4225 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 11:59:46.211964    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:46.277544    4225 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 11:59:47.431168    4225 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15362275s)
	I0719 11:59:47.431224    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 11:59:47.436342    4225 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 11:59:47.442643    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 11:59:47.447570    4225 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 11:59:47.510545    4225 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 11:59:47.570483    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:47.636459    4225 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 11:59:47.641820    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 11:59:47.646413    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:47.710749    4225 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 11:59:47.751005    4225 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 11:59:47.751084    4225 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 11:59:47.753603    4225 start.go:563] Will wait 60s for crictl version
	I0719 11:59:47.753656    4225 ssh_runner.go:195] Run: which crictl
	I0719 11:59:47.755042    4225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 11:59:47.769117    4225 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0719 11:59:47.769182    4225 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 11:59:47.788802    4225 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 11:59:46.251407    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:46.251490    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:46.268638    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:46.268702    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:46.280603    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:46.280671    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:46.291508    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:46.291572    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:46.302950    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:46.303014    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:46.319597    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:46.319659    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:46.330152    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:46.330211    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:46.340424    4100 logs.go:276] 0 containers: []
	W0719 11:59:46.340437    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:46.340492    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:46.350818    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:46.350835    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:46.350840    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:46.365197    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:46.365208    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:46.376868    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:46.376878    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:46.403429    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:46.403439    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:46.425526    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:46.425537    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:46.450058    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:46.450073    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:46.465420    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:46.465435    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:46.504792    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:46.504800    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:46.516286    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:46.516298    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:46.530988    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:46.531001    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:46.548661    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:46.548670    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:46.560740    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:46.560755    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:46.565485    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:46.565491    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:46.579701    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:46.579712    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:46.591550    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:46.591566    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:46.627641    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:46.627652    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:47.806168    4225 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0719 11:59:47.806399    4225 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0719 11:59:47.807716    4225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 11:59:47.811267    4225 kubeadm.go:883] updating cluster {Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0719 11:59:47.811319    4225 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 11:59:47.811371    4225 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 11:59:47.822279    4225 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 11:59:47.822288    4225 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 11:59:47.822334    4225 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 11:59:47.825514    4225 ssh_runner.go:195] Run: which lz4
	I0719 11:59:47.826767    4225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 11:59:47.828014    4225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 11:59:47.828024    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0719 11:59:48.771142    4225 docker.go:649] duration metric: took 944.418042ms to copy over tarball
	I0719 11:59:48.771199    4225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 11:59:49.143408    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:49.916025    4225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.144829583s)
	I0719 11:59:49.916041    4225 ssh_runner.go:146] rm: /preloaded.tar.lz4
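
The sequence above is the preload fast path: check whether /preloaded.tar.lz4 already exists on the guest, scp the ~360MB cached tarball over when it doesn't, unpack it into /var with lz4 (preserving security.capability xattrs), then delete the tarball. A sketch of the guest-side steps in Go, with a plain local exec standing in for the SSH runner and a local copy standing in for scp:

    // Transfer and unpack the preloaded-images tarball if it is absent.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func loadPreload(localTar, guestTar string) error {
        // existence check, like `stat -c "%s %y" /preloaded.tar.lz4` in the log
        if exec.Command("stat", guestTar).Run() == nil {
            return nil // already present, skip the transfer
        }
        // the real flow scp's the tarball; cp stands in for that here
        if err := exec.Command("cp", localTar, guestTar).Run(); err != nil {
            return err
        }
        // unpack into /var keeping capability xattrs, exactly as the log shows
        if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
            "security.capability", "-I", "lz4", "-C", "/var", "-xf", guestTar).Run(); err != nil {
            return err
        }
        return exec.Command("rm", guestTar).Run() // reclaim the space
    }

    func main() {
        fmt.Println(loadPreload(
            "preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
            "/preloaded.tar.lz4"))
    }
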
	I0719 11:59:49.931358    4225 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 11:59:49.934405    4225 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0719 11:59:49.939486    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:50.004249    4225 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 11:59:51.467680    4225 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.463435084s)
	I0719 11:59:51.467777    4225 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 11:59:51.480167    4225 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 11:59:51.480176    4225 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 11:59:51.480182    4225 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 11:59:51.485596    4225 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:51.487244    4225 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.489048    4225 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.489111    4225 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:51.490729    4225 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.493027    4225 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.493030    4225 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.493140    4225 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.495124    4225 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.495283    4225 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.496582    4225 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 11:59:51.496763    4225 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.497746    4225 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:51.498257    4225 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.498877    4225 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 11:59:51.499481    4225 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:51.908012    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.918404    4225 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0719 11:59:51.918432    4225 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.918478    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.926897    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.929363    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0719 11:59:51.935404    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.945306    4225 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0719 11:59:51.945329    4225 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.945384    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.949847    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.951110    4225 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0719 11:59:51.951135    4225 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.951166    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.956682    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0719 11:59:51.961344    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.962125    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 11:59:51.968116    4225 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0719 11:59:51.968138    4225 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.968194    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.969655    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 11:59:51.969760    4225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 11:59:51.979664    4225 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0719 11:59:51.979689    4225 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.979752    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.984205    4225 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0719 11:59:51.984224    4225 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0719 11:59:51.984274    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0719 11:59:51.988154    4225 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 11:59:51.988279    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:51.996887    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0719 11:59:51.996912    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0719 11:59:51.996930    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0719 11:59:51.996976    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0719 11:59:52.025028    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 11:59:52.025045    4225 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0719 11:59:52.025066    4225 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:52.025108    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:52.025132    4225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0719 11:59:52.068178    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0719 11:59:52.068218    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 11:59:52.068221    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0719 11:59:52.068321    4225 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0719 11:59:52.077887    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0719 11:59:52.077904    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0719 11:59:52.081778    4225 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 11:59:52.081887    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:52.092616    4225 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 11:59:52.092629    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0719 11:59:52.129652    4225 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 11:59:52.129676    4225 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:52.129730    4225 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:52.180624    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0719 11:59:52.180648    4225 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 11:59:52.180654    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0719 11:59:52.203584    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 11:59:52.203713    4225 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 11:59:52.297054    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 11:59:52.297070    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0719 11:59:52.297100    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0719 11:59:52.363290    4225 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 11:59:52.363320    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0719 11:59:52.487664    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 11:59:52.487690    4225 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 11:59:52.487697    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0719 11:59:52.723640    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
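Each "Loading image" step above is a plain bash pipeline into the Docker daemon; sudo sits on the cat because /var/lib/minikube/images is root-owned, while the docker socket is reachable by the SSH user directly. A sketch of the exact command shape, using the pause tarball path from the log:

    # Same pipeline the Run: lines show, one image tarball at a time.
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load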
	I0719 11:59:52.723691    4225 cache_images.go:92] duration metric: took 1.243516667s to LoadCachedImages
	W0719 11:59:52.723736    4225 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0719 11:59:52.723742    4225 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0719 11:59:52.723792    4225 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-275000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 11:59:52.723858    4225 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 11:59:52.737901    4225 cni.go:84] Creating CNI manager for ""
	I0719 11:59:52.737916    4225 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:59:52.737921    4225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 11:59:52.737930    4225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-275000 NodeName:stopped-upgrade-275000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 11:59:52.737988    4225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-275000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
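The generated kubeadm.yaml above is not applied in one shot; during restart it is fed to individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd), as the Run: lines around 11:59:53.17 below show. A condensed sketch of that sequence, with the PATH prefix copied from the log:

    CONF=/var/tmp/minikube/kubeadm.yaml
    K8S_PATH=/var/lib/minikube/binaries/v1.24.1
    # $phase is intentionally unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$K8S_PATH:$PATH" kubeadm init phase $phase --config "$CONF"
    done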
	I0719 11:59:52.738047    4225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0719 11:59:52.741560    4225 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 11:59:52.741591    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 11:59:52.744517    4225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0719 11:59:52.749490    4225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 11:59:52.754551    4225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0719 11:59:52.761297    4225 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0719 11:59:52.762561    4225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 11:59:52.766145    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:52.835575    4225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 11:59:52.845900    4225 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000 for IP: 10.0.2.15
	I0719 11:59:52.845911    4225 certs.go:194] generating shared ca certs ...
	I0719 11:59:52.845920    4225 certs.go:226] acquiring lock for ca certs: {Name:mk315b805d576c08b7c87d345baabbe459ef4715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:52.846098    4225 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.key
	I0719 11:59:52.846151    4225 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.key
	I0719 11:59:52.846156    4225 certs.go:256] generating profile certs ...
	I0719 11:59:52.846217    4225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.key
	I0719 11:59:52.846238    4225 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6
	I0719 11:59:52.846250    4225 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0719 11:59:52.970195    4225 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6 ...
	I0719 11:59:52.970209    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6: {Name:mk8106679c8ec9d10f63c1edbf0c3509686f0e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:52.970551    4225 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6 ...
	I0719 11:59:52.970557    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6: {Name:mk601f4ec21661ecc272a2663420b49625baa029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:52.970707    4225 certs.go:381] copying /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt
	I0719 11:59:52.971377    4225 certs.go:385] copying /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key
	I0719 11:59:52.971542    4225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/proxy-client.key
	I0719 11:59:52.971686    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565.pem (1338 bytes)
	W0719 11:59:52.971718    4225 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0719 11:59:52.971724    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 11:59:52.971744    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem (1082 bytes)
	I0719 11:59:52.971763    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem (1123 bytes)
	I0719 11:59:52.971781    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem (1679 bytes)
	I0719 11:59:52.971820    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0719 11:59:52.972127    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 11:59:52.979000    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 11:59:52.986022    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 11:59:52.993410    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 11:59:53.000867    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 11:59:53.008114    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 11:59:53.014613    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 11:59:53.021982    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 11:59:53.029885    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0719 11:59:53.037331    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0719 11:59:53.044419    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 11:59:53.050948    4225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 11:59:53.056102    4225 ssh_runner.go:195] Run: openssl version
	I0719 11:59:53.057958    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0719 11:59:53.061032    4225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0719 11:59:53.062421    4225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:20 /usr/share/ca-certificates/15652.pem
	I0719 11:59:53.062442    4225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0719 11:59:53.064193    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 11:59:53.066939    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 11:59:53.070178    4225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:59:53.071747    4225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:59:53.071763    4225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:59:53.073548    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 11:59:53.076740    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0719 11:59:53.079554    4225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0719 11:59:53.080884    4225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:20 /usr/share/ca-certificates/1565.pem
	I0719 11:59:53.080898    4225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0719 11:59:53.082753    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
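The three blocks above install each CA into the system trust store the way OpenSSL expects lookups to work: hash the certificate's subject, then symlink the PEM to `<hash>.0` under /etc/ssl/certs. A one-liner equivalent, using one of the paths from the log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # openssl x509 -hash prints the subject hash OpenSSL uses for trust lookups.
    sudo ln -fs "$CERT" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$CERT").0"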
	I0719 11:59:53.086035    4225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 11:59:53.087571    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 11:59:53.089455    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 11:59:53.091501    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 11:59:53.093419    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 11:59:53.095394    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 11:59:53.097157    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
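The run of `-checkend 86400` probes above is a cheap expiry test: openssl exits non-zero if the certificate expires within the next 86400 seconds, and that exit status is what drives the regeneration decision. For example:

    # The exit status, not the output, is the signal here.
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver.crt expires within 24h"
    fi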
	I0719 11:59:53.099022    4225 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:59:53.099091    4225 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 11:59:53.110355    4225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 11:59:53.113792    4225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 11:59:53.113798    4225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 11:59:53.113821    4225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 11:59:53.116729    4225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:59:53.117024    4225 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-275000" does not appear in /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:59:53.117123    4225 kubeconfig.go:62] /Users/jenkins/minikube-integration/19307-1066/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-275000" cluster setting kubeconfig missing "stopped-upgrade-275000" context setting]
	I0719 11:59:53.117393    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:53.117810    4225 kapi.go:59] client config for stopped-upgrade-275000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a87790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 11:59:53.118122    4225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 11:59:53.120903    4225 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-275000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
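Drift detection here is nothing more than the `sudo diff -u` at 11:59:53.118122: a non-zero exit from diff marks the deployed kubeadm.yaml as stale, and the unified diff itself becomes the log output above. Roughly:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift: reconfigure from kubeadm.yaml.new"
    fi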
	I0719 11:59:53.120908    4225 kubeadm.go:1160] stopping kube-system containers ...
	I0719 11:59:53.120942    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 11:59:53.131835    4225 docker.go:483] Stopping containers: [88b7f06c953c 02a941fd8e55 f46177018be0 11f4036961d9 8db569ae2b3e 3e008b48c13a 0f3bce8296ce 79c60209a5a1]
	I0719 11:59:53.131903    4225 ssh_runner.go:195] Run: docker stop 88b7f06c953c 02a941fd8e55 f46177018be0 11f4036961d9 8db569ae2b3e 3e008b48c13a 0f3bce8296ce 79c60209a5a1
	I0719 11:59:53.142599    4225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 11:59:53.148533    4225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 11:59:53.151188    4225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 11:59:53.151193    4225 kubeadm.go:157] found existing configuration files:
	
	I0719 11:59:53.151216    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0719 11:59:53.153892    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 11:59:53.153914    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 11:59:53.156865    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0719 11:59:53.159268    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 11:59:53.159288    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 11:59:53.162082    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0719 11:59:53.164966    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 11:59:53.164987    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 11:59:53.167669    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0719 11:59:53.170197    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 11:59:53.170220    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
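The four grep/rm pairs above implement keep-if-it-matches cleanup: each kubeconfig survives only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it in the next phase. A compact equivalent (adding -q, which the logged grep omits):

    EP="https://control-plane.minikube.internal:50538"
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done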
	I0719 11:59:53.173131    4225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 11:59:53.175736    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.197134    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.514199    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.624513    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.652882    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.675376    4225 api_server.go:52] waiting for apiserver process to appear ...
	I0719 11:59:53.675466    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:59:54.177221    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:59:54.676607    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:59:54.680962    4225 api_server.go:72] duration metric: took 1.005603958s to wait for apiserver process to appear ...
	I0719 11:59:54.680970    4225 api_server.go:88] waiting for apiserver healthz status ...
	I0719 11:59:54.680979    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
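From here on, two processes (PIDs 4225 and 4100, from parallel test runs) interleave in the log, both looping on the same probe: an HTTPS GET of /healthz with a client-side timeout, retried until the apiserver answers or the wait budget runs out. A hand-run equivalent of one probe (-k because the serving cert chains only to minikube's own CA):

    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"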
	I0719 11:59:54.145595    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:54.145789    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 11:59:54.172534    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 11:59:54.172644    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 11:59:54.189676    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 11:59:54.189746    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 11:59:54.204192    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 11:59:54.204267    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 11:59:54.220572    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 11:59:54.220646    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 11:59:54.232196    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 11:59:54.232260    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 11:59:54.244200    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 11:59:54.244264    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 11:59:54.255973    4100 logs.go:276] 0 containers: []
	W0719 11:59:54.255988    4100 logs.go:278] No container was found matching "kindnet"
	I0719 11:59:54.256059    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 11:59:54.269255    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 11:59:54.269276    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 11:59:54.269285    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 11:59:54.310669    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 11:59:54.310682    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 11:59:54.327460    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 11:59:54.327472    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 11:59:54.342878    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 11:59:54.342889    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 11:59:54.369673    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 11:59:54.369689    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 11:59:54.387291    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 11:59:54.387308    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 11:59:54.402809    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 11:59:54.402824    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 11:59:54.415692    4100 logs.go:123] Gathering logs for container status ...
	I0719 11:59:54.415705    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 11:59:54.428988    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 11:59:54.429001    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 11:59:54.470736    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 11:59:54.470757    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 11:59:54.486426    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 11:59:54.486441    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 11:59:54.502247    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 11:59:54.502262    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 11:59:54.517701    4100 logs.go:123] Gathering logs for Docker ...
	I0719 11:59:54.517715    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 11:59:54.543899    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 11:59:54.543916    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 11:59:54.549233    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 11:59:54.549271    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 11:59:54.562431    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 11:59:54.562446    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 11:59:57.085188    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:59.683149    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:59.683234    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:02.086864    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:02.087013    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:02.103619    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:02.103722    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:02.116509    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:02.116574    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:02.128058    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:02.128123    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:02.138413    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:02.138476    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:02.148711    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:02.148775    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:02.160860    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:02.160930    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:02.170970    4100 logs.go:276] 0 containers: []
	W0719 12:00:02.170980    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:02.171027    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:02.181495    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:02.181514    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:02.181519    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:02.186383    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:02.186389    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:02.207194    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:02.207205    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:02.222584    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:02.222595    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:02.234129    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:02.234140    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:02.252264    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:02.252275    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:02.267633    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:02.267644    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:02.293141    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:02.293152    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:02.333594    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:02.333606    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:02.348384    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:02.348395    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:02.362853    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:02.362864    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:02.380030    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:02.380041    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:02.394075    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:02.394087    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:02.408657    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:02.408668    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:02.449526    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:02.449550    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:02.471486    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:02.471497    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:04.684253    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:04.684328    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:04.986320    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:09.685416    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:09.685543    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:09.988664    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:09.988818    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:10.000390    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:10.000459    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:10.010906    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:10.010979    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:10.021324    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:10.021390    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:10.032331    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:10.032401    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:10.042600    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:10.042672    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:10.057280    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:10.057349    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:10.067517    4100 logs.go:276] 0 containers: []
	W0719 12:00:10.067529    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:10.067587    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:10.078541    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:10.078557    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:10.078562    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:10.092857    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:10.092868    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:10.104929    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:10.104946    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:10.109686    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:10.109691    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:10.130724    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:10.130736    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:10.148179    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:10.148190    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:10.162705    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:10.162717    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:10.202692    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:10.202706    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:10.238746    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:10.238758    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:10.263654    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:10.263665    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:10.275235    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:10.275248    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:10.287765    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:10.287780    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:10.304618    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:10.304628    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:10.326496    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:10.326507    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:10.338479    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:10.338490    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:10.352539    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:10.352549    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:12.869088    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:14.686875    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:14.686897    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:17.871351    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:17.871509    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:17.887671    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:17.887761    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:17.900204    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:17.900281    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:17.911905    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:17.911976    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:17.922796    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:17.922865    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:17.939194    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:17.939262    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:17.951737    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:17.951809    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:17.962043    4100 logs.go:276] 0 containers: []
	W0719 12:00:17.962061    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:17.962119    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:17.972089    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:17.972106    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:17.972112    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:17.976507    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:17.976513    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:18.013730    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:18.013741    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:18.028193    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:18.028205    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:18.043174    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:18.043186    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:18.070363    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:18.070379    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:18.084784    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:18.084798    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:18.099213    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:18.099222    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:18.113262    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:18.113272    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:18.124272    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:18.124284    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:19.688155    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:19.688240    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:18.148819    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:18.148829    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:18.189281    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:18.189289    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:18.200505    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:18.200517    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:18.224302    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:18.224312    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:18.238400    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:18.238414    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:18.250763    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:18.250775    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:20.766789    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:24.690718    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:24.690759    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:25.769501    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:25.769925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:25.813466    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:25.813606    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:25.834883    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:25.834986    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:25.852576    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:25.852652    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:25.864742    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:25.864811    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:25.875527    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:25.875607    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:25.886127    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:25.886206    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:25.896775    4100 logs.go:276] 0 containers: []
	W0719 12:00:25.896784    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:25.896844    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:25.908774    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:25.908797    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:25.908804    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:25.950142    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:25.950165    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:25.964719    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:25.964733    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:25.980911    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:25.980924    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:26.002186    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:26.002196    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:26.023873    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:26.023884    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:26.049241    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:26.049254    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:26.093487    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:26.093498    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:26.108430    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:26.108439    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:26.128697    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:26.128708    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:26.140709    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:26.140720    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:26.154696    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:26.154707    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:26.166262    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:26.166271    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:26.178153    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:26.178163    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:26.184385    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:26.184393    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:26.198803    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:26.198814    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
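
The block above is one full iteration of minikube's apiserver wait loop: GET https://10.0.2.15:8443/healthz with a short per-request client timeout, and on each timeout enumerate the control-plane containers and tail their logs before retrying. A minimal Go sketch of just the polling half, assuming a 5s request timeout and a skip-verify TLS config (the endpoint and rough timings come from the log; the function name pollHealthz and the TLS handling are illustrative assumptions, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz approximates the wait loop visible in the log:
// GET <url> with a short client timeout, retrying until the overall
// deadline expires. On each failed probe minikube additionally dumps
// component logs, which this sketch omits.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between "stopped:" lines
		Transport: &http.Transport{
			// assumption: skip verification of the apiserver's self-signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver became healthy
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err) // in this run, the probe times out on every attempt
	}
}
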
	I0719 12:00:29.693087    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:29.693158    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:28.714678    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:34.695605    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:34.695652    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:33.717049    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:33.717467    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:33.767328    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:33.767434    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:33.784480    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:33.784558    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:33.797161    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:33.797236    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:33.811135    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:33.811212    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:33.822956    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:33.823029    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:33.834555    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:33.834619    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:33.844948    4100 logs.go:276] 0 containers: []
	W0719 12:00:33.844961    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:33.845018    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:33.856050    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:33.856069    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:33.856075    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:33.868157    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:33.868168    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:33.882627    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:33.882638    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:33.898541    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:33.898551    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:33.910841    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:33.910851    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:33.931127    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:33.931137    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:33.968548    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:33.968555    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:33.972903    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:33.972913    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:33.987589    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:33.987600    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:34.009887    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:34.009898    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:34.026504    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:34.026516    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:34.044871    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:34.044884    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:34.073055    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:34.073067    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:34.085401    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:34.085413    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:34.120698    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:34.120709    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:34.142005    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:34.142015    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:36.667606    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:39.695965    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:39.696031    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:41.670064    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:41.670287    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:41.693768    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:41.693860    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:41.708715    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:41.708802    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:41.721068    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:41.721129    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:41.731901    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:41.731968    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:41.742850    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:41.742907    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:41.753743    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:41.753803    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:41.763457    4100 logs.go:276] 0 containers: []
	W0719 12:00:41.763467    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:41.763514    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:41.774337    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:41.774356    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:41.774362    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:41.779153    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:41.779159    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:41.815348    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:41.815359    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:41.829463    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:41.829473    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:41.844274    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:41.844284    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:41.855114    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:41.855126    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:41.867187    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:41.867198    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:41.891937    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:41.891946    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:41.932221    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:41.932231    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:41.958035    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:41.958045    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:41.972186    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:41.972197    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:41.987392    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:41.987405    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:42.000350    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:42.000360    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:42.019097    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:42.019107    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:42.034381    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:42.034391    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:42.050979    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:42.050989    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:44.698412    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:44.698454    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:44.564801    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:49.700606    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:49.700622    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:49.567052    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:49.567320    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:49.593283    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:49.593406    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:49.611100    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:49.611181    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:49.624251    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:49.624311    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:49.635741    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:49.635803    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:49.646190    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:49.646258    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:49.657482    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:49.657544    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:49.668078    4100 logs.go:276] 0 containers: []
	W0719 12:00:49.668089    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:49.668143    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:49.683963    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:49.683980    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:49.683986    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:49.708276    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:49.708285    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:49.721953    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:49.721965    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:49.734213    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:49.734227    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:49.751991    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:49.752004    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:49.764168    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:49.764180    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:49.800229    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:49.800240    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:49.814606    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:49.814615    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:49.829562    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:49.829572    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:49.833914    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:49.833919    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:49.847571    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:49.847582    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:49.862054    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:49.862068    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:49.877047    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:49.877058    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:49.894700    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:49.894713    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:49.909854    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:49.909864    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:49.948058    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:49.948067    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:52.470915    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:54.702689    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:54.702822    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:54.720188    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:00:54.720288    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:54.730940    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:00:54.731003    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:54.741578    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:00:54.741649    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:54.752453    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:00:54.752523    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:54.767080    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:00:54.767141    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:54.777687    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:00:54.777753    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:54.788001    4225 logs.go:276] 0 containers: []
	W0719 12:00:54.788012    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:54.788062    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:54.798540    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:00:54.798555    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:54.798560    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:54.824626    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:00:54.824635    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:57.473182    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:57.473374    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:57.485519    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:00:57.485604    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:57.495806    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:00:57.495872    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:57.506034    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:00:57.506104    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:57.517128    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:00:57.517192    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:57.527591    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:00:57.527663    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:57.538321    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:00:57.538386    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:57.548895    4100 logs.go:276] 0 containers: []
	W0719 12:00:57.548906    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:57.548958    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:57.559562    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:00:57.559578    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:00:57.559583    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:00:57.576678    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:57.576688    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:57.581097    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:57.581107    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:57.617479    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:00:57.617503    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:00:57.639473    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:00:57.639484    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:00:57.653344    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:00:57.653356    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:00:57.667533    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:00:57.667543    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:00:57.682646    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:00:57.682657    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:00:57.694996    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:00:57.695007    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:57.707431    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:57.707439    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:57.732377    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:57.732394    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:57.772368    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:00:57.772377    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:00:57.794385    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:00:57.794399    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:00:57.816429    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:00:57.816445    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:00:57.828384    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:00:57.828396    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:00:57.842927    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:00:57.842938    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:00:54.836431    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:00:54.836442    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:00:54.883849    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:00:54.883868    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:00:54.909361    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:00:54.909375    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:00:54.928766    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:00:54.928786    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:00:54.945413    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:00:54.945431    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:00:54.956880    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:54.956895    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:54.961399    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:54.961406    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:55.065338    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:00:55.065353    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:00:55.079569    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:00:55.079583    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:00:55.098111    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:55.098121    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:55.139025    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:00:55.139033    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:00:55.153605    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:00:55.153621    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:00:55.165071    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:00:55.165086    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:00:55.176324    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:00:55.176338    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:00:55.192083    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:00:55.192097    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
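
Each retry above repeats the same enumerate-then-dump pattern: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per control-plane component, then docker logs --tail 400 <id> for every ID found, with a "No container was found" warning when the filter matches nothing (as with kindnet here). A rough local sketch of that pattern, assuming plain os/exec rather than the SSH runner minikube actually uses inside the guest (collectLogs and the components slice are illustrative names):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components the log shows minikube probing, one `docker ps` filter each.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// collectLogs mirrors the enumerate-then-dump cycle in the log:
// list matching container IDs, then tail each container's log.
func collectLogs() {
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}

func main() { collectLogs() }
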
	I0719 12:00:57.705199    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:00.355401    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:02.707403    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:02.707574    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:02.728268    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:02.728356    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:02.742800    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:02.742874    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:02.754592    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:02.754662    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:02.764944    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:02.765011    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:02.776604    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:02.776675    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:02.792384    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:02.792461    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:02.807168    4225 logs.go:276] 0 containers: []
	W0719 12:01:02.807179    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:02.807234    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:02.818049    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:02.818068    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:02.818073    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:02.829979    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:02.829994    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:02.844146    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:02.844156    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:02.855583    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:02.855597    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:02.866749    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:02.866764    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:02.878501    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:02.878512    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:02.890291    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:02.890302    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:02.908793    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:02.908807    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:02.920736    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:02.920748    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:02.932533    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:02.932544    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:02.958523    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:02.958531    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:02.962598    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:02.962605    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:02.980030    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:02.980044    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:02.994641    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:02.994652    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:03.033057    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:03.033072    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:03.069430    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:03.069436    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:03.109734    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:03.109747    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:05.357718    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:05.358064    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:05.388676    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:05.388803    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:05.406938    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:05.407021    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:05.420790    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:05.420864    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:05.432427    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:05.432497    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:05.442897    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:05.442967    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:05.453789    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:05.453867    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:05.463878    4100 logs.go:276] 0 containers: []
	W0719 12:01:05.463889    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:05.463945    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:05.476323    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:05.476340    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:05.476346    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:05.515078    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:05.515090    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:05.526935    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:05.526949    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:05.541265    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:05.541277    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:05.578748    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:05.578760    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:05.592150    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:05.592161    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:05.606602    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:05.606614    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:05.621070    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:05.621083    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:05.638281    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:05.638290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:05.649732    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:05.649742    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:05.654582    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:05.654589    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:05.675565    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:05.675578    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:05.690004    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:05.690014    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:05.707017    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:05.707027    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:05.718782    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:05.718793    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:05.733461    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:05.733473    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:05.627607    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:08.260327    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:10.629848    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:10.630095    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:10.655682    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:10.655803    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:10.672734    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:10.672814    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:10.686877    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:10.686952    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:10.702842    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:10.702913    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:10.715619    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:10.715691    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:10.726878    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:10.726952    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:10.736467    4225 logs.go:276] 0 containers: []
	W0719 12:01:10.736479    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:10.736534    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:10.746650    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:10.746668    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:10.746674    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:10.750799    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:10.750807    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:10.788865    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:10.788877    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:10.803923    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:10.803934    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:10.839922    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:10.839933    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:10.858864    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:10.858875    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:10.875177    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:10.875188    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:10.887118    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:10.887129    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:10.901470    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:10.901480    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:10.915464    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:10.915475    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:10.929419    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:10.929431    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:10.944102    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:10.944112    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:10.955849    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:10.955860    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:10.979959    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:10.979967    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:11.015974    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:11.015981    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:11.027504    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:11.027515    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:11.041291    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:11.041302    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:13.553970    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:13.262639    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:13.262886    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:13.288268    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:13.288364    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:13.304968    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:13.305047    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:13.317864    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:13.317935    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:13.329369    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:13.329447    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:13.339747    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:13.339811    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:13.352813    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:13.352880    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:13.362854    4100 logs.go:276] 0 containers: []
	W0719 12:01:13.362866    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:13.362925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:13.373445    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:13.373461    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:13.373467    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:13.385369    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:13.385380    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:13.408928    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:13.408937    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:13.446474    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:13.446482    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:13.466861    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:13.466871    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:13.481420    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:13.481432    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:13.486407    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:13.486413    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:13.500490    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:13.500500    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:13.516239    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:13.516249    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:13.530790    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:13.530801    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:13.542217    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:13.542228    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:13.554190    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:13.554201    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:13.568311    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:13.568322    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:13.583421    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:13.583432    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:13.603156    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:13.603171    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:13.617359    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:13.617371    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:16.153735    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:18.556122    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:18.556289    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:18.578442    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:18.578507    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:18.589186    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:18.589255    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:18.599446    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:18.599523    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:18.609941    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:18.610020    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:18.620643    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:18.620705    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:18.630994    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:18.631057    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:18.640934    4225 logs.go:276] 0 containers: []
	W0719 12:01:18.640947    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:18.640994    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:18.651301    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:18.651322    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:18.651328    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:18.688448    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:18.688458    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:18.702702    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:18.702713    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:18.715750    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:18.715759    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:18.728109    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:18.728120    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:18.762573    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:18.762584    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:18.800959    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:18.800971    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:18.812472    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:18.812484    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:18.824058    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:18.824070    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:18.828745    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:18.828754    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:18.842810    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:18.842821    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:18.857184    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:18.857196    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:18.868947    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:18.868961    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:18.885097    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:18.885107    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:18.896629    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:18.896639    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:18.916374    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:18.916385    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:18.928341    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:18.928352    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:21.156577    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:21.157027    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:21.198667    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:21.198834    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:21.226557    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:21.226658    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:21.240970    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:21.241040    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:21.252825    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:21.252899    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:21.263853    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:21.263924    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:21.274831    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:21.274903    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:21.289984    4100 logs.go:276] 0 containers: []
	W0719 12:01:21.289995    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:21.290059    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:21.308400    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:21.308417    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:21.308424    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:21.323794    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:21.323804    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:21.335075    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:21.335086    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:21.350294    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:21.350304    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:21.392387    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:21.392413    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:21.397301    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:21.397312    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:21.439834    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:21.439845    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:21.462092    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:21.462105    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:21.476278    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:21.476292    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:21.490803    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:21.490813    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:21.505235    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:21.505250    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:21.516852    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:21.516861    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:21.540725    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:21.540733    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:21.551928    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:21.551940    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:21.569795    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:21.569809    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:21.581803    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:21.581816    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:21.454333    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
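
Note that two minikube processes (PIDs 4100 and 4225, evidently two tests running in parallel in separate QEMU VMs) write to this log concurrently. Their entries interleave, so timestamps can step backward when the stream switches PIDs, as here: 12:01:21.58 from 4100 is followed by 12:01:21.45 from 4225. Within a single PID the entries remain in order.
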
	I0719 12:01:24.099362    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:26.456583    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:26.456780    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:26.475429    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:26.475502    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:26.487488    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:26.487561    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:26.497549    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:26.497621    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:26.507955    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:26.508028    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:26.519768    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:26.519835    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:26.530419    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:26.530484    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:26.540334    4225 logs.go:276] 0 containers: []
	W0719 12:01:26.540355    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:26.540411    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:26.550904    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:26.550926    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:26.550931    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:26.568528    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:26.568538    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:26.606634    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:26.606646    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:26.621337    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:26.621347    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:26.632327    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:26.632339    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:26.643841    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:26.643851    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:26.659143    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:26.659155    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:26.673868    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:26.673880    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:26.685422    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:26.685433    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:26.697002    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:26.697012    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:26.712481    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:26.712496    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:26.750580    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:26.750589    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:26.754874    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:26.754883    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:26.766163    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:26.766173    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:26.783618    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:26.783629    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:26.808781    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:26.808788    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:26.843697    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:26.843710    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:29.356753    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:29.101567    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:29.101778    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:29.125094    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:29.125220    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:29.141813    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:29.141895    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:29.163042    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:29.163107    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:29.174129    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:29.174203    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:29.186584    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:29.186651    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:29.198076    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:29.198144    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:29.208479    4100 logs.go:276] 0 containers: []
	W0719 12:01:29.208491    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:29.208548    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:29.219134    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:29.219157    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:29.219164    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:29.258273    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:29.258283    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:29.263033    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:29.263041    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:29.297712    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:29.297724    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:29.313217    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:29.313228    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:29.333740    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:29.333753    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:29.348697    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:29.348708    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:29.373160    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:29.373167    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:29.387045    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:29.387055    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:29.398765    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:29.398775    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:29.413151    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:29.413164    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:29.427518    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:29.427530    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:29.445629    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:29.445642    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:29.457322    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:29.457335    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:29.472709    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:29.472719    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:29.487813    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:29.487822    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:32.001510    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
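
The healthz check itself is a plain HTTPS GET against the guest apiserver (10.0.2.15 is QEMU's default user-mode NAT guest address, which is why both VMs report the same IP). The recurring "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" failure is the Go HTTP client's overall request timeout firing before the apiserver returns response headers. A hedged sketch of such a probe; the 5-second timeout and the InsecureSkipVerify setting are assumptions for a self-signed test cluster, not values taken from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // this timeout produces "Client.Timeout exceeded while
            // awaiting headers" when the apiserver never answers
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: accept the test cluster's self-signed cert
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
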
	I0719 12:01:34.358884    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:34.359073    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:34.380250    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:34.380358    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:34.394988    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:34.395066    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:34.407575    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:34.407640    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:34.418079    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:34.418148    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:34.429002    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:34.429068    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:34.440014    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:34.440083    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:34.450581    4225 logs.go:276] 0 containers: []
	W0719 12:01:34.450592    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:34.450647    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:34.461323    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:34.461342    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:34.461348    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:34.497979    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:34.497999    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:34.536912    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:34.536924    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:34.551402    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:34.551413    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:34.565637    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:34.565648    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:34.581595    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:34.581605    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:34.592652    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:34.592663    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:34.605662    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:34.605674    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:34.624057    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:34.624067    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:34.664104    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:34.664117    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:34.678439    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:34.678451    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:34.690482    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:34.690494    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:34.702129    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:34.702140    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:34.714192    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:34.714206    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:34.738120    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:34.738129    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:34.742256    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:34.742262    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:34.753100    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:34.753112    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:37.003888    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:37.004306    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:37.039122    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:37.039257    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:37.058691    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:37.058788    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:37.073502    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:37.073579    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:37.085728    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:37.085804    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:37.101336    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:37.101405    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:37.112061    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:37.112128    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:37.122472    4100 logs.go:276] 0 containers: []
	W0719 12:01:37.122488    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:37.122553    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:37.133274    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:37.133304    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:37.133312    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:37.169239    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:37.169254    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:37.183548    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:37.183559    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:37.198838    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:37.198848    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:37.213445    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:37.213455    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:37.230585    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:37.230597    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:37.254407    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:37.254415    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:37.293777    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:37.293787    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:37.314577    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:37.314586    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:37.326384    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:37.326396    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:37.341414    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:37.341423    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:37.354707    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:37.354720    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:37.359486    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:37.359493    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:37.377761    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:37.377770    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:37.392750    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:37.392759    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:37.404431    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:37.404440    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:37.277428    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:39.917882    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:42.279641    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:42.280061    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:42.316668    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:42.316808    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:42.336707    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:42.336802    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:42.350720    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:42.350798    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:42.362938    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:42.363010    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:42.374629    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:42.374691    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:42.387117    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:42.387182    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:42.398225    4225 logs.go:276] 0 containers: []
	W0719 12:01:42.398237    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:42.398288    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:42.409333    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:42.409353    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:42.409359    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:42.422432    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:42.422445    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:42.434623    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:42.434637    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:42.446317    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:42.446329    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:42.458424    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:42.458436    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:42.469918    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:42.469928    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:42.508227    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:42.508242    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:42.513954    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:42.513964    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:42.549358    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:42.549372    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:42.571839    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:42.571854    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:42.585725    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:42.585739    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:42.623111    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:42.623123    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:42.648267    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:42.648276    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:42.661669    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:42.661683    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:42.675769    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:42.675796    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:42.695667    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:42.695679    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:42.709565    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:42.709580    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:44.920061    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:44.920224    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:44.933321    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:44.933400    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:44.945102    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:44.945170    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:44.955438    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:44.955502    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:44.965373    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:44.965442    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:44.976473    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:44.976541    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:44.986282    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:44.986347    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:44.996339    4100 logs.go:276] 0 containers: []
	W0719 12:01:44.996349    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:44.996401    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:45.006629    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:45.006650    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:45.006655    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:45.041585    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:45.041599    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:45.056262    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:45.056272    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:45.078393    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:45.078403    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:45.093026    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:45.093039    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:45.105741    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:45.105753    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:45.110968    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:45.110975    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:45.130341    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:45.130352    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:45.146823    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:45.146832    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:45.164913    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:45.164924    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:45.178732    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:45.178743    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:45.196253    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:45.196264    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:45.210837    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:45.210846    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:45.235006    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:45.235013    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:45.274880    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:45.274888    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:45.286636    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:45.286647    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:47.800584    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:45.229274    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:52.803176    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:52.803381    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:52.815091    4100 logs.go:276] 2 containers: [8e8e2f8d23b3 b8f4445650ff]
	I0719 12:01:52.815161    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:52.830718    4100 logs.go:276] 2 containers: [c124f6d6c9be 213784f515d6]
	I0719 12:01:52.830792    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:52.841556    4100 logs.go:276] 1 containers: [849a5406c967]
	I0719 12:01:52.841624    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:52.852174    4100 logs.go:276] 2 containers: [e2faaa8a6a14 4af0fa5b107c]
	I0719 12:01:52.852245    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:52.862846    4100 logs.go:276] 1 containers: [80d719841390]
	I0719 12:01:52.862911    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:52.873746    4100 logs.go:276] 2 containers: [701fade7f831 3ed1a881f9e2]
	I0719 12:01:52.873820    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:52.888397    4100 logs.go:276] 0 containers: []
	W0719 12:01:52.888409    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:52.888466    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:52.898549    4100 logs.go:276] 1 containers: [3a65181cdb60]
	I0719 12:01:52.898568    4100 logs.go:123] Gathering logs for kube-scheduler [4af0fa5b107c] ...
	I0719 12:01:52.898573    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4af0fa5b107c"
	I0719 12:01:52.913165    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:52.913174    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:52.935716    4100 logs.go:123] Gathering logs for storage-provisioner [3a65181cdb60] ...
	I0719 12:01:52.935725    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65181cdb60"
	I0719 12:01:52.947759    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:52.947772    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:52.985967    4100 logs.go:123] Gathering logs for kube-apiserver [8e8e2f8d23b3] ...
	I0719 12:01:52.985975    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e8e2f8d23b3"
	I0719 12:01:53.008328    4100 logs.go:123] Gathering logs for kube-apiserver [b8f4445650ff] ...
	I0719 12:01:53.008338    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8f4445650ff"
	I0719 12:01:53.028688    4100 logs.go:123] Gathering logs for etcd [c124f6d6c9be] ...
	I0719 12:01:53.028699    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c124f6d6c9be"
	I0719 12:01:53.043193    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:53.043205    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:53.047820    4100 logs.go:123] Gathering logs for kube-controller-manager [701fade7f831] ...
	I0719 12:01:53.047830    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 701fade7f831"
	I0719 12:01:53.065341    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:01:53.065351    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:53.077089    4100 logs.go:123] Gathering logs for kube-proxy [80d719841390] ...
	I0719 12:01:53.077100    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80d719841390"
	I0719 12:01:53.089067    4100 logs.go:123] Gathering logs for kube-controller-manager [3ed1a881f9e2] ...
	I0719 12:01:53.089079    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ed1a881f9e2"
	I0719 12:01:53.103664    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:53.103676    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:50.231414    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:50.231615    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:50.249123    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:50.249211    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:50.262213    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:50.262287    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:50.273990    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:50.274050    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:50.284773    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:50.284834    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:50.295244    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:50.295312    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:50.306415    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:50.306482    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:50.320159    4225 logs.go:276] 0 containers: []
	W0719 12:01:50.320173    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:50.320236    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:50.331144    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:50.331161    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:50.331167    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:50.342589    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:50.342598    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:50.354076    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:50.354089    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:50.365642    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:50.365654    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:50.402215    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:50.402231    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:50.416575    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:50.416586    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:50.456000    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:50.456012    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:50.473503    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:50.473513    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:50.490900    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:50.490911    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:50.502295    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:50.502306    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:50.515215    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:50.515227    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:50.519435    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:50.519441    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:50.533625    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:50.533636    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:50.547898    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:50.547908    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:50.559873    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:50.559886    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:50.601180    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:50.601192    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:50.612936    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:50.612947    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:53.137305    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:53.139725    4100 logs.go:123] Gathering logs for etcd [213784f515d6] ...
	I0719 12:01:53.139733    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 213784f515d6"
	I0719 12:01:53.154062    4100 logs.go:123] Gathering logs for coredns [849a5406c967] ...
	I0719 12:01:53.154075    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 849a5406c967"
	I0719 12:01:53.165599    4100 logs.go:123] Gathering logs for kube-scheduler [e2faaa8a6a14] ...
	I0719 12:01:53.165611    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2faaa8a6a14"
	I0719 12:01:55.682251    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:58.139161    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:58.139376    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:58.157394    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:58.157482    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:58.172744    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:58.172815    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:58.184403    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:58.184474    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:58.194655    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:58.194721    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:58.204824    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:58.204892    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:58.226522    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:58.226597    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:58.237500    4225 logs.go:276] 0 containers: []
	W0719 12:01:58.237512    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:58.237572    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:58.248231    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:58.248251    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:58.248257    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:58.261736    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:58.261746    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:58.273104    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:58.273115    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:58.311728    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:58.311747    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:58.316147    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:58.316154    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:58.329670    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:58.329681    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:58.344185    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:58.344202    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:58.356553    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:58.356569    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:58.374757    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:58.374769    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:58.399614    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:58.399622    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:58.411499    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:58.411511    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:58.447867    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:58.447880    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:58.469847    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:58.469858    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:58.483893    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:58.483902    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:58.526254    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:58.526267    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:58.537527    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:58.537538    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:58.549113    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:58.549124    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:00.684633    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:00.684706    4100 kubeadm.go:597] duration metric: took 4m3.839195333s to restartPrimaryControlPlane
	W0719 12:02:00.684775    4100 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 12:02:00.684805    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
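
After roughly four minutes of failed healthz probes (the duration metric above reports 4m3.8s), minikube stops trying to restart the existing control plane and falls back to a full kubeadm reset; the "<no value>" in the warning appears to be an unrendered template placeholder in minikube's message, not part of the reason. A hypothetical Go sketch of that retry-then-reset control flow, with the probe cadence and deadline inferred from the log timestamps rather than taken from minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverHealthy is a stand-in for the HTTPS healthz probe shown above
    func apiserverHealthy() bool {
        return exec.Command("curl", "-sk", "--max-time", "5",
            "https://10.0.2.15:8443/healthz").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // log: gave up after ~4m3.8s
        for time.Now().Before(deadline) {
            if apiserverHealthy() {
                fmt.Println("control plane is back")
                return
            }
            time.Sleep(5 * time.Second) // log: probes are roughly 5s apart
        }
        fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
        // mirrors: sudo env PATH=... kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
        _ = exec.Command("sudo", "kubeadm", "reset",
            "--cri-socket", "/var/run/cri-dockerd.sock", "--force").Run()
    }
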
	I0719 12:02:01.621544    4100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:02:01.626645    4100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 12:02:01.629556    4100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 12:02:01.632219    4100 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:02:01.632226    4100 kubeadm.go:157] found existing configuration files:
	
	I0719 12:02:01.632250    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/admin.conf
	I0719 12:02:01.634733    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:02:01.634752    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 12:02:01.638561    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/kubelet.conf
	I0719 12:02:01.641755    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:02:01.641776    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 12:02:01.644739    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/controller-manager.conf
	I0719 12:02:01.647186    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:02:01.647211    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 12:02:01.650294    4100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/scheduler.conf
	I0719 12:02:01.653361    4100 kubeadm.go:163] "https://control-plane.minikube.internal:50327" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50327 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:02:01.653382    4100 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 12:02:01.656163    4100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 12:02:01.676017    4100 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 12:02:01.676056    4100 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 12:02:01.728280    4100 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 12:02:01.728346    4100 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 12:02:01.728401    4100 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 12:02:01.776606    4100 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 12:02:01.779778    4100 out.go:204]   - Generating certificates and keys ...
	I0719 12:02:01.779819    4100 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 12:02:01.779853    4100 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 12:02:01.779911    4100 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 12:02:01.779948    4100 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 12:02:01.779986    4100 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 12:02:01.780013    4100 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 12:02:01.780043    4100 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 12:02:01.780072    4100 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 12:02:01.780111    4100 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 12:02:01.780149    4100 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 12:02:01.780173    4100 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 12:02:01.780204    4100 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 12:02:01.917982    4100 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 12:02:02.010414    4100 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 12:02:02.073142    4100 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 12:02:02.110641    4100 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 12:02:02.139936    4100 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:02:02.140251    4100 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:02:02.140380    4100 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 12:02:02.226511    4100 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 12:02:02.229794    4100 out.go:204]   - Booting up control plane ...
	I0719 12:02:02.230022    4100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 12:02:02.230090    4100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 12:02:02.230213    4100 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 12:02:02.230341    4100 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 12:02:02.230630    4100 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 12:02:01.062769    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:06.732847    4100 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502250 seconds
	I0719 12:02:06.732912    4100 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 12:02:06.736247    4100 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 12:02:07.262564    4100 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 12:02:07.263034    4100 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-589000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 12:02:07.766778    4100 kubeadm.go:310] [bootstrap-token] Using token: g0ch5u.y4j1a027fyhiu0zl
	I0719 12:02:07.769874    4100 out.go:204]   - Configuring RBAC rules ...
	I0719 12:02:07.769930    4100 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 12:02:07.769977    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 12:02:07.773599    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 12:02:07.774525    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 12:02:07.775511    4100 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 12:02:07.776575    4100 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 12:02:07.779601    4100 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 12:02:07.952464    4100 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 12:02:08.171622    4100 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 12:02:08.171996    4100 kubeadm.go:310] 
	I0719 12:02:08.172024    4100 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 12:02:08.172028    4100 kubeadm.go:310] 
	I0719 12:02:08.172065    4100 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 12:02:08.172072    4100 kubeadm.go:310] 
	I0719 12:02:08.172084    4100 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 12:02:08.172120    4100 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 12:02:08.172147    4100 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 12:02:08.172150    4100 kubeadm.go:310] 
	I0719 12:02:08.172177    4100 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 12:02:08.172181    4100 kubeadm.go:310] 
	I0719 12:02:08.172215    4100 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 12:02:08.172220    4100 kubeadm.go:310] 
	I0719 12:02:08.172248    4100 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 12:02:08.172289    4100 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 12:02:08.172334    4100 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 12:02:08.172339    4100 kubeadm.go:310] 
	I0719 12:02:08.172382    4100 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 12:02:08.172419    4100 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 12:02:08.172424    4100 kubeadm.go:310] 
	I0719 12:02:08.172463    4100 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g0ch5u.y4j1a027fyhiu0zl \
	I0719 12:02:08.172521    4100 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 \
	I0719 12:02:08.172536    4100 kubeadm.go:310] 	--control-plane 
	I0719 12:02:08.172540    4100 kubeadm.go:310] 
	I0719 12:02:08.172579    4100 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 12:02:08.172584    4100 kubeadm.go:310] 
	I0719 12:02:08.172625    4100 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g0ch5u.y4j1a027fyhiu0zl \
	I0719 12:02:08.172685    4100 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 
	I0719 12:02:08.172748    4100 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 12:02:08.172756    4100 cni.go:84] Creating CNI manager for ""
	I0719 12:02:08.172765    4100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:02:08.183178    4100 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 12:02:08.186423    4100 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 12:02:08.189480    4100 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 12:02:08.194594    4100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 12:02:08.194650    4100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 12:02:08.194654    4100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-589000 minikube.k8s.io/updated_at=2024_07_19T12_02_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=running-upgrade-589000 minikube.k8s.io/primary=true
	I0719 12:02:08.197751    4100 ops.go:34] apiserver oom_adj: -16
	I0719 12:02:08.254627    4100 kubeadm.go:1113] duration metric: took 60.011333ms to wait for elevateKubeSystemPrivileges
	I0719 12:02:08.254736    4100 kubeadm.go:394] duration metric: took 4m11.423679834s to StartCluster
	I0719 12:02:08.254749    4100 settings.go:142] acquiring lock: {Name:mk67411000c671a58f92dc65eb422ba28279f174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:02:08.254840    4100 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:02:08.255208    4100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:02:08.255415    4100 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:02:08.255507    4100 config.go:182] Loaded profile config "running-upgrade-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:02:08.255440    4100 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 12:02:08.255541    4100 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-589000"
	I0719 12:02:08.255541    4100 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-589000"
	I0719 12:02:08.255567    4100 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-589000"
	W0719 12:02:08.255571    4100 addons.go:243] addon storage-provisioner should already be in state true
	I0719 12:02:08.255555    4100 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-589000"
	I0719 12:02:08.255581    4100 host.go:66] Checking if "running-upgrade-589000" exists ...
	I0719 12:02:08.256525    4100 kapi.go:59] client config for running-upgrade-589000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/running-upgrade-589000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106227790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:02:08.256639    4100 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-589000"
	W0719 12:02:08.256644    4100 addons.go:243] addon default-storageclass should already be in state true
	I0719 12:02:08.256651    4100 host.go:66] Checking if "running-upgrade-589000" exists ...
	I0719 12:02:08.259189    4100 out.go:177] * Verifying Kubernetes components...
	I0719 12:02:08.259494    4100 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 12:02:08.263456    4100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 12:02:08.263464    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 12:02:08.267213    4100 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:02:06.065031    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:06.065394    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:06.095385    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:06.095518    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:06.115073    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:06.115168    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:06.133751    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:06.133825    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:06.145914    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:06.145976    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:06.156891    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:06.156954    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:06.167754    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:06.167823    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:06.178215    4225 logs.go:276] 0 containers: []
	W0719 12:02:06.178226    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:06.178281    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:06.189081    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:06.189103    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:06.189108    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:06.227135    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:06.227145    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:06.241022    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:06.241034    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:06.260352    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:06.260363    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:06.272885    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:06.272896    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:06.287275    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:06.287286    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:06.299487    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:06.299504    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:06.314298    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:06.314311    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:06.350370    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:06.350379    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:06.354447    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:06.354456    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:06.390950    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:06.390961    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:06.404785    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:06.404800    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:06.417333    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:06.417345    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:06.432760    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:06.432773    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:06.448783    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:06.448795    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:06.461392    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:06.461405    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:06.486110    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:06.486128    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:09.001710    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:08.271237    4100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:02:08.275266    4100 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:02:08.275273    4100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 12:02:08.275278    4100 sshutil.go:53] new ssh client: &{IP:localhost Port:50256 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/running-upgrade-589000/id_rsa Username:docker}
	I0719 12:02:08.370030    4100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:02:08.375623    4100 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:02:08.375662    4100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:02:08.382426    4100 api_server.go:72] duration metric: took 127.00025ms to wait for apiserver process to appear ...
	I0719 12:02:08.382437    4100 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:02:08.382446    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:08.398532    4100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 12:02:08.415623    4100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:02:14.004020    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:14.004330    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:14.035031    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:14.035153    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:14.054384    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:14.054480    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:14.068474    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:14.068549    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:14.081210    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:14.081283    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:14.094920    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:14.094992    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:14.105500    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:14.105576    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:14.115483    4225 logs.go:276] 0 containers: []
	W0719 12:02:14.115492    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:14.115540    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:14.125995    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:14.126014    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:14.126020    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:14.164596    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:14.164610    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:14.206437    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:14.206450    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:14.218241    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:14.218255    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:14.239500    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:14.239513    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:14.253014    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:14.253027    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:14.264897    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:14.264912    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:14.280012    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:14.280023    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:14.304951    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:14.304963    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:14.309020    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:14.309026    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:14.343260    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:14.343274    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:14.359899    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:14.359910    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:14.374233    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:14.374245    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:14.385553    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:14.385565    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:14.399220    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:14.399232    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:14.413589    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:14.413602    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:14.424485    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:14.424497    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:13.383111    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:13.383167    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:16.936591    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:18.384408    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:18.384451    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:21.938803    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:21.938965    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:21.951784    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:21.951842    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:21.963205    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:21.963276    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:21.978457    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:21.978527    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:21.988950    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:21.989025    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:22.000751    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:22.000817    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:22.016991    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:22.017076    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:22.027222    4225 logs.go:276] 0 containers: []
	W0719 12:02:22.027233    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:22.027287    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:22.038427    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:22.038447    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:22.038454    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:22.049859    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:22.049872    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:22.061891    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:22.061908    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:22.096542    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:22.096553    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:22.113731    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:22.113764    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:22.128218    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:22.128230    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:22.140054    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:22.140064    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:22.152200    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:22.152211    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:22.189971    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:22.189979    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:22.202403    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:22.202416    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:22.220429    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:22.220444    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:22.244751    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:22.244759    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:22.258050    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:22.258061    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:22.269405    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:22.269417    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:22.280802    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:22.280814    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:22.285310    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:22.285317    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:22.323135    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:22.323150    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:23.384617    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:23.384636    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:24.839881    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:28.384836    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:28.384863    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:29.842420    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:29.842785    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:29.883433    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:29.883581    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:29.906139    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:29.906248    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:29.920941    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:29.921016    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:29.933548    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:29.933621    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:29.944463    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:29.944534    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:29.955789    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:29.955856    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:29.971501    4225 logs.go:276] 0 containers: []
	W0719 12:02:29.971517    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:29.971575    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:29.982146    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:29.982165    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:29.982170    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:29.999409    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:29.999419    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:30.016914    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:30.016926    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:30.041742    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:30.041752    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:30.081610    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:30.081625    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:30.100864    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:30.100873    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:30.111961    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:30.111975    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:30.123927    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:30.123937    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:30.138092    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:30.138102    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:30.155683    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:30.155695    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:30.167776    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:30.167790    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:30.182337    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:30.182348    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:30.225020    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:30.225032    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:30.237057    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:30.237068    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:30.248446    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:30.248459    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:30.252912    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:30.252920    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:30.289051    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:30.289065    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:32.803083    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:33.385188    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:33.385239    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:38.386036    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:38.386060    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 12:02:38.708452    4100 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 12:02:38.712658    4100 out.go:177] * Enabled addons: storage-provisioner
	I0719 12:02:37.805771    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:37.806221    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:37.846664    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:37.846833    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:37.872808    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:37.872898    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:37.887259    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:37.887334    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:37.899419    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:37.899486    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:37.910089    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:37.910152    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:37.920840    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:37.920912    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:37.931119    4225 logs.go:276] 0 containers: []
	W0719 12:02:37.931129    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:37.931190    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:37.941556    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:37.941574    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:37.941580    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:37.953623    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:37.953636    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:37.968379    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:37.968395    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:37.980463    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:37.980476    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:37.998348    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:37.998362    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:38.032348    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:38.032360    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:38.076139    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:38.076156    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:38.092633    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:38.092646    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:38.132056    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:38.132064    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:38.143251    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:38.143265    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:38.154821    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:38.154834    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:38.169444    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:38.169453    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:38.193552    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:38.193561    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:38.207891    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:38.207899    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:38.221897    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:38.221910    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:38.239232    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:38.239245    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:38.251039    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:38.251052    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:38.720571    4100 addons.go:510] duration metric: took 30.465557167s for enable addons: enabled=[storage-provisioner]
	I0719 12:02:40.757401    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:43.386719    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:43.386768    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:45.759715    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:45.759865    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:45.772794    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:45.772880    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:45.783582    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:45.783650    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:45.794101    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:45.794165    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:45.814421    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:45.814497    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:45.824396    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:45.824465    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:45.834854    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:45.834918    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:45.845066    4225 logs.go:276] 0 containers: []
	W0719 12:02:45.845081    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:45.845142    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:45.856561    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:45.856578    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:45.856583    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:45.868981    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:45.868993    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:45.907111    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:45.907118    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:45.942282    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:45.942294    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:45.956611    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:45.956621    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:45.968781    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:45.968793    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:45.980767    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:45.980777    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:45.998509    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:45.998520    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:46.022509    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:46.022516    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:46.038152    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:46.038163    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:46.052524    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:46.052536    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:46.065147    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:46.065158    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:46.076632    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:46.076641    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:46.081186    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:46.081192    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:46.101281    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:46.101291    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:46.118267    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:46.118278    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:46.155042    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:46.155051    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:48.668741    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:48.387461    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:48.387509    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:53.669466    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:53.669625    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:53.683694    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:53.683771    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:53.695587    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:53.695656    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:53.706128    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:53.706189    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:53.716846    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:53.716915    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:53.728668    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:53.728732    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:53.739180    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:53.739249    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:53.749884    4225 logs.go:276] 0 containers: []
	W0719 12:02:53.749899    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:53.749952    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:53.760262    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:53.760279    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:53.760285    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:53.797158    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:53.797167    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:53.817932    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:53.817944    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:53.829301    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:53.829311    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:53.841559    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:53.841570    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:53.856573    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:53.856584    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:53.894631    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:53.894642    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:53.909066    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:53.909076    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:53.931459    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:53.931467    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:53.943637    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:53.943648    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:53.948535    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:53.948546    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:53.960756    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:53.960767    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:53.978588    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:53.978599    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:53.995620    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:53.995631    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:54.007169    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:54.007179    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:54.041952    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:54.041962    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:54.056316    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:54.056329    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
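
	[editor's note] Each time the health check times out, the same gathering routine runs: enumerate the k8s_* containers for each control-plane component with a docker ps name filter, then tail the last 400 lines of each container (alongside the kubelet/docker units via journalctl and "describe nodes" through the bundled v1.24.1 kubectl). A minimal Go sketch of that enumerate-and-tail pattern follows; it is illustrative only, not minikube's logs.go, and it assumes the docker CLI is on PATH on the machine running it.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the enumeration step in the log:
	//   docker ps -a --filter=name=k8s_<component> --format {{.ID}}
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The component list matches the filters that appear in the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"storage-provisioner",
		}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			for _, id := range ids {
				// Gathering step: tail the last 400 lines of each container,
				// matching the `docker logs --tail 400 <id>` commands above.
				logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("gathering logs for %s [%s]: %v\n", c, id, err)
					continue
				}
				fmt.Printf("gathered %d bytes for %s [%s]\n", len(logs), c, id)
			}
		}
	}
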
	I0719 12:02:53.388615    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:53.388648    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:56.573351    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:58.390053    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:58.390089    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:01.575594    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:01.575813    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:01.594695    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:01.594760    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:01.605413    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:01.605484    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:01.615682    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:01.615754    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:01.626583    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:01.626657    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:01.636911    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:01.636981    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:01.648616    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:01.648687    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:01.659540    4225 logs.go:276] 0 containers: []
	W0719 12:03:01.659551    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:01.659609    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:01.670212    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:01.670229    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:01.670235    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:01.681699    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:01.681711    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:01.693500    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:01.693510    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:01.710836    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:01.710847    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:01.725559    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:01.725569    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:01.749013    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:01.749023    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:01.763649    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:01.763662    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:01.775273    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:01.775289    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:01.787479    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:01.787489    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:01.792246    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:01.792255    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:01.827448    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:01.827459    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:01.841629    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:01.841642    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:01.854000    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:01.854011    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:01.873279    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:01.873289    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:01.911039    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:01.911047    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:01.948462    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:01.948473    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:01.962995    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:01.963008    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:04.477068    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:03.391838    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:03.391894    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:09.479254    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:09.479494    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:09.501659    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:09.501771    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:09.519587    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:09.519655    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:09.531883    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:09.531949    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:09.542537    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:09.542606    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:09.552964    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:09.553024    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:09.567584    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:09.567647    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:09.581775    4225 logs.go:276] 0 containers: []
	W0719 12:03:09.581786    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:09.581835    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:09.591998    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:09.592018    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:09.592023    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:09.609246    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:09.609256    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:09.621266    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:09.621282    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:09.635424    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:09.635436    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:09.672048    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:09.672056    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:09.708596    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:09.708606    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:09.723434    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:09.723447    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:09.735136    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:09.735148    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:09.753936    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:09.753950    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:09.767152    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:09.767164    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:09.782123    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:09.782137    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:09.794112    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:09.794123    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:09.816486    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:09.816494    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:09.820757    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:09.820766    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:08.394126    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:08.394247    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:08.415316    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:08.415391    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:08.427090    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:08.427153    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:08.437780    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:08.437850    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:08.447669    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:08.447731    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:08.460784    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:08.460867    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:08.471307    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:08.471373    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:08.481343    4100 logs.go:276] 0 containers: []
	W0719 12:03:08.481358    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:08.481413    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:08.491522    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:08.491537    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:08.491542    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:08.524905    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:08.524917    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:08.538344    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:08.538357    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:08.549987    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:08.549999    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:08.562026    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:08.562036    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:08.585490    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:08.585499    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:08.590011    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:08.590019    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:08.629390    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:08.629402    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:08.644008    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:08.644020    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:08.658101    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:08.658114    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:08.669652    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:08.669662    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:08.684599    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:08.684611    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:08.702162    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:08.702172    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:11.216045    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:09.857792    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:09.857806    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:09.869765    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:09.869779    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:09.881249    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:09.881259    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:12.395032    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:16.218834    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:16.219043    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:16.239648    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:16.239776    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:16.254652    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:16.254727    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:16.267114    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:16.267185    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:16.278475    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:16.278541    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:16.289171    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:16.289243    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:16.299115    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:16.299171    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:16.310153    4100 logs.go:276] 0 containers: []
	W0719 12:03:16.310169    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:16.310335    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:16.321613    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:16.321629    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:16.321634    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:16.335913    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:16.335929    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:16.350441    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:16.350456    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:16.362646    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:16.362659    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:16.374433    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:16.374447    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:16.398125    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:16.398133    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:16.431400    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:16.431407    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:16.435970    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:16.435978    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:16.470553    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:16.470562    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:16.484466    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:16.484481    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:16.496209    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:16.496220    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:16.507405    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:16.507415    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:16.525249    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:16.525259    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:17.397370    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:17.397593    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:17.424291    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:17.424393    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:17.444095    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:17.444162    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:17.458777    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:17.458845    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:17.469764    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:17.469832    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:17.479518    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:17.479588    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:17.490626    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:17.490689    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:17.500701    4225 logs.go:276] 0 containers: []
	W0719 12:03:17.500715    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:17.500766    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:17.512963    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:17.512980    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:17.512988    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:17.527442    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:17.527454    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:17.551247    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:17.551258    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:17.563702    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:17.563714    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:17.578167    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:17.578177    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:17.590423    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:17.590434    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:17.602546    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:17.602557    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:17.616620    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:17.616632    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:17.628395    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:17.628406    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:17.646394    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:17.646404    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:17.685605    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:17.685618    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:17.689820    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:17.689826    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:17.724267    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:17.724279    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:17.737946    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:17.737956    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:17.750269    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:17.750280    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:17.761839    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:17.761853    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:17.799288    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:17.799298    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:19.038788    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:20.313228    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:24.041194    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:24.041553    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:24.084611    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:24.084716    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:24.103835    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:24.103913    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:24.115594    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:24.115666    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:24.126632    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:24.126703    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:24.137671    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:24.137740    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:24.148006    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:24.148069    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:24.158239    4100 logs.go:276] 0 containers: []
	W0719 12:03:24.158250    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:24.158304    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:24.168986    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:24.169002    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:24.169008    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:24.182783    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:24.182794    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:24.194693    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:24.194708    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:24.207281    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:24.207290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:24.222292    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:24.222303    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:24.234008    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:24.234021    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:24.269507    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:24.269515    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:24.274229    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:24.274238    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:24.309374    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:24.309385    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:24.335409    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:24.335421    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:24.347016    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:24.347028    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:24.362131    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:24.362146    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:24.374217    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:24.374230    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:26.893507    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:25.315611    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:25.315908    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:25.350683    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:25.350843    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:25.369740    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:25.369851    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:25.388465    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:25.388527    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:25.400473    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:25.400541    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:25.410978    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:25.411043    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:25.422184    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:25.422245    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:25.433716    4225 logs.go:276] 0 containers: []
	W0719 12:03:25.433729    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:25.433786    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:25.449711    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:25.449730    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:25.449735    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:25.464737    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:25.464750    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:25.476284    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:25.476294    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:25.488413    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:25.488425    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:25.525969    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:25.525981    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:25.540083    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:25.540093    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:25.558724    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:25.558734    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:25.582378    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:25.582385    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:25.594092    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:25.594101    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:25.606155    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:25.606171    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:25.644966    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:25.644978    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:25.649431    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:25.649437    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:25.663782    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:25.663793    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:25.703211    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:25.703221    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:25.718761    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:25.718771    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:25.731032    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:25.731043    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:25.742672    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:25.742683    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:28.256234    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:31.895233    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:31.895404    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:31.908661    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:31.908746    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:31.920333    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:31.920397    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:31.930184    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:31.930253    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:31.940801    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:31.940863    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:31.953929    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:31.953999    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:31.964717    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:31.964779    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:31.974903    4100 logs.go:276] 0 containers: []
	W0719 12:03:31.974913    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:31.974962    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:31.985741    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:31.985756    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:31.985761    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:32.021036    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:32.021052    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:32.035102    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:32.035115    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:32.050562    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:32.050574    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:32.062278    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:32.062289    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:32.079392    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:32.079403    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:32.104311    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:32.104318    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:32.139325    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:32.139333    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:32.143834    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:32.143842    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:32.154969    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:32.154982    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:32.173361    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:32.173375    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:32.184491    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:32.184501    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:32.199563    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:32.199576    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:33.258594    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:33.258757    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:33.275130    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:33.275218    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:33.289695    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:33.289770    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:33.299995    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:33.300060    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:33.310543    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:33.310608    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:33.321243    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:33.321309    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:33.332599    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:33.332678    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:33.343374    4225 logs.go:276] 0 containers: []
	W0719 12:03:33.343387    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:33.343439    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:33.354561    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:33.354579    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:33.354584    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:33.366227    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:33.366239    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:33.404375    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:33.404384    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:33.444426    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:33.444442    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:33.458974    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:33.458991    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:33.470439    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:33.470455    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:33.506337    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:33.506352    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:33.520407    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:33.520422    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:33.524685    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:33.524693    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:33.536676    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:33.536692    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:33.551842    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:33.551851    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:33.564219    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:33.564234    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:33.586104    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:33.586112    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:33.598491    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:33.598508    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:33.612985    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:33.613000    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:33.627215    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:33.627227    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:33.638505    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:33.638516    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:34.713176    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:36.158392    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:39.715366    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:39.715516    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:39.729914    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:39.729990    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:39.741424    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:39.741486    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:39.756181    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:39.756252    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:39.766860    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:39.766925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:39.777546    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:39.777612    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:39.792271    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:39.792334    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:39.802865    4100 logs.go:276] 0 containers: []
	W0719 12:03:39.802881    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:39.802943    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:39.813643    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:39.813661    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:39.813666    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:39.847167    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:39.847178    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:39.852111    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:39.852118    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:39.864312    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:39.864322    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:39.876368    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:39.876379    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:39.900249    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:39.900262    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:39.934039    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:39.934051    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:39.951992    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:39.952000    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:39.965932    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:39.965944    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:39.977854    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:39.977865    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:39.992937    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:39.992954    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:40.011153    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:40.011163    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:40.023421    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:40.023432    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:42.537081    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:41.160756    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:41.161032    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:41.186563    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:41.186677    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:41.203550    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:41.203633    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:41.217587    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:41.217658    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:41.228513    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:41.228585    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:41.239090    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:41.239149    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:41.249726    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:41.249787    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:41.260012    4225 logs.go:276] 0 containers: []
	W0719 12:03:41.260025    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:41.260084    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:41.270035    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:41.270056    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:41.270062    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:41.307321    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:41.307335    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:41.321661    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:41.321672    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:41.337575    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:41.337586    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:41.349354    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:41.349367    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:41.353454    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:41.353463    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:41.367211    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:41.367223    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:41.378793    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:41.378806    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:41.390471    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:41.390483    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:41.404713    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:41.404723    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:41.416154    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:41.416167    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:41.427993    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:41.428004    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:41.446181    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:41.446192    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:41.468188    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:41.468195    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:41.504924    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:41.504933    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:41.543364    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:41.543375    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:41.555711    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:41.555723    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
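The gathering cycle above follows a fixed pattern: enumerate each control-plane component's containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail the last 400 lines of each match with `docker logs`. Below is a minimal local Go sketch of that pattern; the real minikube code in logs.go runs the same commands remotely through ssh_runner, and the helper names here are hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose names match k8s_<component>,
// mirroring: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs returns the last 400 log lines of one container,
// mirroring: docker logs --tail 400 <id>
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("==> %s [%s]: %d bytes of logs\n", c, id, len(logs))
		}
	}
}
```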
	I0719 12:03:44.072591    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:47.539628    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:47.539993    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:47.573749    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:47.573882    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:47.594610    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:47.594702    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:47.608665    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:47.608741    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:47.620819    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:47.620886    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:47.637270    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:47.637340    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:47.647642    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:47.647714    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:47.657879    4100 logs.go:276] 0 containers: []
	W0719 12:03:47.657894    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:47.657948    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:47.668745    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:47.668781    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:47.668787    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:47.673507    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:47.673514    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:47.767883    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:47.767897    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:47.781799    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:47.781810    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:47.793146    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:47.793157    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:47.804854    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:47.804868    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:47.817848    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:47.817861    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:47.854673    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:47.854688    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:47.869532    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:47.869544    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:47.881668    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:47.881681    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:47.896564    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:47.896577    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:47.908617    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:47.908630    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:47.926412    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:47.926423    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:49.074893    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
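Each "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pair above is one iteration of a timed HTTP probe against https://10.0.2.15:8443/healthz. A minimal Go sketch of such a probe loop follows, assuming a 5-second client timeout to match the gaps in the log; it skips TLS verification purely for illustration (the real client trusts the cluster CA instead).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one probe of the apiserver healthz endpoint with a
// short client timeout, mirroring the check/timeout pairs in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: roughly matches the log's check-to-timeout gap
		Transport: &http.Transport{
			// Illustration only: the real client verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. context deadline exceeded (Client.Timeout exceeded)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for {
		fmt.Println("Checking apiserver healthz at", url, "...")
		if err := checkHealthz(url); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
}
```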
	I0719 12:03:49.075160    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:49.106721    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:49.106843    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:49.125000    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:49.125095    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:49.139821    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:49.139886    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:49.151641    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:49.151713    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:49.162534    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:49.162611    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:49.172838    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:49.172912    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:49.184957    4225 logs.go:276] 0 containers: []
	W0719 12:03:49.184967    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:49.185022    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:49.195681    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:49.195700    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:49.195705    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:49.212306    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:49.212317    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:49.227656    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:49.227666    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:49.232357    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:49.232365    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:49.270019    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:49.270029    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:49.282373    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:49.282388    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:49.293873    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:49.293884    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:49.315566    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:49.315572    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:49.333321    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:49.333331    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:49.347130    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:49.347141    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:49.384903    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:49.384912    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:49.427678    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:49.427692    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:49.442625    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:49.442635    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:49.458800    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:49.458811    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:49.471612    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:49.471620    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:49.488747    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:49.488763    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:49.500245    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:49.500256    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:50.453525    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:52.015345    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:55.456197    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:55.456615    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:55.491492    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:03:55.491627    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:55.512708    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:03:55.512799    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:55.529998    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:03:55.530076    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:55.542293    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:03:55.542364    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:55.554948    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:03:55.555012    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:55.565949    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:03:55.566016    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:55.580652    4100 logs.go:276] 0 containers: []
	W0719 12:03:55.580665    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:55.580722    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:55.591317    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:03:55.591331    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:03:55.591339    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:03:55.603147    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:03:55.603159    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:03:55.618274    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:03:55.618284    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:03:55.630389    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:03:55.630399    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:03:55.642473    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:55.642487    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:55.667563    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:03:55.667574    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:55.678774    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:55.678784    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:55.716242    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:55.716250    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:55.754616    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:03:55.754629    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:03:55.769014    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:03:55.769026    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:03:55.780459    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:03:55.780470    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:03:55.799715    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:55.799725    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:55.804377    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:03:55.804385    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:03:57.017632    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:57.017699    4225 kubeadm.go:597] duration metric: took 4m3.907245916s to restartPrimaryControlPlane
	W0719 12:03:57.017759    4225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 12:03:57.017788    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0719 12:03:58.033917    4225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016131167s)
	I0719 12:03:58.033981    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:03:58.039042    4225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 12:03:58.042286    4225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 12:03:58.045285    4225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:03:58.045292    4225 kubeadm.go:157] found existing configuration files:
	
	I0719 12:03:58.045314    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0719 12:03:58.047861    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:03:58.047887    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 12:03:58.050955    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0719 12:03:58.054003    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:03:58.054039    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 12:03:58.056614    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0719 12:03:58.059168    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:03:58.059190    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 12:03:58.062180    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0719 12:03:58.065034    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:03:58.065060    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
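The grep-then-rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is absent (or, as here, when the file does not exist at all). A local Go sketch of the same logic; the real code issues `sudo grep` and `sudo rm -f` over SSH.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes a kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep-then-rm pairs in the log above.
func cleanStaleConfig(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
		os.Remove(path) // ignore errors, like rm -f
	}
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50538"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		cleanStaleConfig(f, endpoint)
	}
}
```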
	I0719 12:03:58.067564    4225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 12:03:58.131818    4225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 12:03:58.321417    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:04.950482    4225 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 12:04:04.950512    4225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 12:04:04.950556    4225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 12:04:04.950610    4225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 12:04:04.950669    4225 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 12:04:04.950715    4225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 12:04:04.953992    4225 out.go:204]   - Generating certificates and keys ...
	I0719 12:04:04.954029    4225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 12:04:04.954066    4225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 12:04:04.954104    4225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 12:04:04.954137    4225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 12:04:04.954184    4225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 12:04:04.954217    4225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 12:04:04.954254    4225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 12:04:04.954291    4225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 12:04:04.954334    4225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 12:04:04.954382    4225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 12:04:04.954407    4225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 12:04:04.954438    4225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 12:04:04.954476    4225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 12:04:04.954506    4225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 12:04:04.954540    4225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 12:04:04.954579    4225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 12:04:04.954635    4225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:04:04.954679    4225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:04:04.954700    4225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 12:04:04.954745    4225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 12:04:04.965021    4225 out.go:204]   - Booting up control plane ...
	I0719 12:04:04.965058    4225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 12:04:04.965111    4225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 12:04:04.965150    4225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 12:04:04.965192    4225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 12:04:04.965276    4225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 12:04:04.965317    4225 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503086 seconds
	I0719 12:04:04.965391    4225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 12:04:04.965455    4225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 12:04:04.965491    4225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 12:04:04.965596    4225 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-275000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 12:04:04.965627    4225 kubeadm.go:310] [bootstrap-token] Using token: g8q9zb.vvtlr4dftj1by9c6
	I0719 12:04:04.969008    4225 out.go:204]   - Configuring RBAC rules ...
	I0719 12:04:04.969056    4225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 12:04:04.969108    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 12:04:04.969187    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 12:04:04.969253    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0719 12:04:04.969319    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 12:04:04.969375    4225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 12:04:04.969449    4225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 12:04:04.969472    4225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 12:04:04.969503    4225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 12:04:04.969505    4225 kubeadm.go:310] 
	I0719 12:04:04.969548    4225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 12:04:04.969552    4225 kubeadm.go:310] 
	I0719 12:04:04.969591    4225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 12:04:04.969593    4225 kubeadm.go:310] 
	I0719 12:04:04.969605    4225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 12:04:04.969636    4225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 12:04:04.969669    4225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 12:04:04.969673    4225 kubeadm.go:310] 
	I0719 12:04:04.969698    4225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 12:04:04.969702    4225 kubeadm.go:310] 
	I0719 12:04:04.969731    4225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 12:04:04.969734    4225 kubeadm.go:310] 
	I0719 12:04:04.969773    4225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 12:04:04.969822    4225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 12:04:04.969863    4225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 12:04:04.969868    4225 kubeadm.go:310] 
	I0719 12:04:04.969926    4225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 12:04:04.969978    4225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 12:04:04.969981    4225 kubeadm.go:310] 
	I0719 12:04:04.970027    4225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g8q9zb.vvtlr4dftj1by9c6 \
	I0719 12:04:04.970088    4225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 \
	I0719 12:04:04.970102    4225 kubeadm.go:310] 	--control-plane 
	I0719 12:04:04.970105    4225 kubeadm.go:310] 
	I0719 12:04:04.970156    4225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 12:04:04.970162    4225 kubeadm.go:310] 
	I0719 12:04:04.970200    4225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g8q9zb.vvtlr4dftj1by9c6 \
	I0719 12:04:04.970347    4225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 
	I0719 12:04:04.970359    4225 cni.go:84] Creating CNI manager for ""
	I0719 12:04:04.970368    4225 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:04:04.979923    4225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 12:04:04.983058    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 12:04:04.986262    4225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
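The scp step above installs a 496-byte bridge conflist at /etc/cni/net.d/1-k8s.conflist; the payload itself is not reproduced in the log. The sketch below writes a generic bridge-plus-portmap conflist of the kind the CNI bridge plugin accepts. The JSON contents and the 10.244.0.0/16 subnet are illustrative assumptions, not the exact bytes minikube ships.

```go
package main

import "os"

// A generic bridge CNI conflist (illustrative; the exact 496-byte payload
// minikube writes is not shown in the log above).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Mirrors the log: sudo mkdir -p /etc/cni/net.d, then write the conflist.
	// Requires root to run for real; shown for illustration.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
		[]byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```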
	I0719 12:04:04.990982    4225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 12:04:04.991020    4225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 12:04:04.991040    4225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-275000 minikube.k8s.io/updated_at=2024_07_19T12_04_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=stopped-upgrade-275000 minikube.k8s.io/primary=true
	I0719 12:04:05.031369    4225 kubeadm.go:1113] duration metric: took 40.380834ms to wait for elevateKubeSystemPrivileges
	I0719 12:04:05.031384    4225 ops.go:34] apiserver oom_adj: -16
	I0719 12:04:05.031389    4225 kubeadm.go:394] duration metric: took 4m11.93582875s to StartCluster
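The oom_adj line above comes from the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj` run: the value -16 makes the kernel far less likely to OOM-kill the apiserver. A small Go sketch of that read, assuming a single kube-apiserver process (it takes the first PID pgrep reports):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reads the kube-apiserver's OOM adjustment, mirroring:
// cat /proc/$(pgrep kube-apiserver)/oom_adj
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		return "", fmt.Errorf("kube-apiserver not running")
	}
	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println("could not read oom_adj:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // e.g. -16, as logged above
}
```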
	I0719 12:04:05.031397    4225 settings.go:142] acquiring lock: {Name:mk67411000c671a58f92dc65eb422ba28279f174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:04:05.031484    4225 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:04:05.031901    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:04:05.032101    4225 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:04:05.032206    4225 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:04:05.032160    4225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 12:04:05.032229    4225 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-275000"
	I0719 12:04:05.032239    4225 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-275000"
	I0719 12:04:05.032243    4225 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-275000"
	W0719 12:04:05.032246    4225 addons.go:243] addon storage-provisioner should already be in state true
	I0719 12:04:05.032250    4225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-275000"
	I0719 12:04:05.032258    4225 host.go:66] Checking if "stopped-upgrade-275000" exists ...
	I0719 12:04:05.035950    4225 out.go:177] * Verifying Kubernetes components...
	I0719 12:04:05.036581    4225 kapi.go:59] client config for stopped-upgrade-275000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a87790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:04:05.040351    4225 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-275000"
	W0719 12:04:05.040356    4225 addons.go:243] addon default-storageclass should already be in state true
	I0719 12:04:05.040362    4225 host.go:66] Checking if "stopped-upgrade-275000" exists ...
	I0719 12:04:05.040873    4225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 12:04:05.040878    4225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 12:04:05.040890    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 12:04:05.043984    4225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:04:03.323572    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:03.323732    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:03.334845    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:03.334914    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:03.346724    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:03.346805    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:03.358064    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:03.358137    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:03.373415    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:03.373482    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:03.383939    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:03.384012    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:03.394763    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:03.394830    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:03.405609    4100 logs.go:276] 0 containers: []
	W0719 12:04:03.405623    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:03.405679    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:03.416266    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:03.416281    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:03.416286    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:03.428262    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:03.428273    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:03.453336    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:03.453349    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:03.466021    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:03.466031    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:03.501897    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:03.501904    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:03.506733    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:03.506739    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:03.518648    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:03.518659    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:03.531780    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:03.531790    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:03.543358    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:03.543373    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:03.559144    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:03.559154    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:03.576771    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:03.576785    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:03.618680    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:03.618691    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:03.633612    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:03.633621    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:06.149569    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:05.048212    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:04:05.052044    4225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:04:05.052051    4225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 12:04:05.052057    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 12:04:05.118775    4225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:04:05.124677    4225 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:04:05.124722    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:04:05.128620    4225 api_server.go:72] duration metric: took 96.510083ms to wait for apiserver process to appear ...
	I0719 12:04:05.128627    4225 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:04:05.128634    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:05.159400    4225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 12:04:05.185279    4225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
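Addon installation above is two steps per addon: copy the manifest into /etc/kubernetes/addons/ over SSH, then apply it with the cluster-local kubectl under the minikube kubeconfig. A Go sketch of the apply step exactly as the log shows it; the scp step is omitted, the paths match the log, and error handling is simplified.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies an addon manifest with the cluster-local kubectl,
// mirroring: sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
//   /var/lib/minikube/binaries/v1.24.1/kubectl apply -f <manifest>
// (sudo accepts leading VAR=value arguments as environment assignments.)
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}
```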
	I0719 12:04:11.151696    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:11.151806    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:11.165498    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:11.165576    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:11.177392    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:11.177454    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:11.188083    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:11.188153    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:11.198530    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:11.198592    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:11.209490    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:11.209552    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:11.219673    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:11.219744    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:11.230407    4100 logs.go:276] 0 containers: []
	W0719 12:04:11.230419    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:11.230473    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:11.241314    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:11.241329    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:11.241334    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:11.253102    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:11.253112    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:11.267759    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:11.267772    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:11.291201    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:11.291211    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:11.324588    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:11.324599    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:11.338931    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:11.338941    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:11.360713    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:11.360724    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:11.372115    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:11.372125    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:11.389521    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:11.389531    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:11.405034    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:11.405047    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:11.416601    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:11.416613    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:11.421328    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:11.421338    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:11.456952    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:11.456965    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:10.130725    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:10.130787    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:13.977217    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:15.131006    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:15.131037    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:18.979493    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:18.979636    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:18.991130    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:18.991223    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:19.002695    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:19.002765    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:19.013891    4100 logs.go:276] 2 containers: [15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:19.013962    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:19.025479    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:19.025544    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:19.036992    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:19.037069    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:19.052828    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:19.052902    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:19.064722    4100 logs.go:276] 0 containers: []
	W0719 12:04:19.064734    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:19.064796    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:19.076411    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:19.076428    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:19.076434    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:19.088967    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:19.088980    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:19.101640    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:19.101652    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:19.114397    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:19.114409    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:19.154215    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:19.154228    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:19.169722    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:19.169737    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:19.184175    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:19.184187    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:19.196331    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:19.196346    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:19.214428    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:19.214441    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:19.239086    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:19.239097    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:19.272340    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:19.272352    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:19.276783    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:19.276790    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:19.290115    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:19.290128    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:21.807328    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:20.131262    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:20.131291    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:26.809661    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:26.810001    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:26.842189    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:26.842302    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:26.862221    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:26.862309    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:26.875699    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:26.875777    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:26.886677    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:26.886743    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:26.897453    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:26.897513    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:26.907552    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:26.907615    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:26.917946    4100 logs.go:276] 0 containers: []
	W0719 12:04:26.917957    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:26.918007    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:26.928588    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:26.928605    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:26.928611    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:26.940048    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:26.940061    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:26.960592    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:26.960605    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:26.973069    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:26.973080    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:26.978186    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:26.978195    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:26.989746    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:26.989757    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:27.004999    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:27.005012    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:27.016413    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:27.016424    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:27.028501    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:27.028512    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:27.063342    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:27.063352    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:27.086315    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:27.086324    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:27.103785    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:27.103795    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:27.117970    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:27.117983    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:27.132743    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:27.132754    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:27.156247    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:27.156257    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:25.131638    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:25.131682    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:29.695175    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:30.132209    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:30.132235    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:35.132853    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:35.132887    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 12:04:35.504308    4225 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 12:04:35.508606    4225 out.go:177] * Enabled addons: storage-provisioner
	I0719 12:04:34.697573    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:34.697832    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:34.725733    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:34.725862    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:34.745816    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:34.745899    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:34.759240    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:34.759314    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:34.770500    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:34.770569    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:34.781404    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:34.781475    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:34.792293    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:34.792354    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:34.802841    4100 logs.go:276] 0 containers: []
	W0719 12:04:34.802855    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:34.802915    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:34.819924    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:34.819942    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:34.819947    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:34.845252    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:34.845263    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:34.859701    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:34.859713    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:34.875436    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:34.875452    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:34.887729    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:34.887741    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:34.905364    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:34.905374    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:34.943344    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:34.943357    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:34.948195    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:34.948205    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:34.962462    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:34.962477    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:34.973977    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:34.973988    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:34.985692    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:34.985702    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:34.996873    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:34.996889    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:35.030482    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:35.030497    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:35.048630    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:35.048642    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:35.060429    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:35.060444    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:37.574223    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:35.517540    4225 addons.go:510] duration metric: took 30.485812541s for enable addons: enabled=[storage-provisioner]
	I0719 12:04:42.576923    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:42.577153    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:42.599078    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:42.599199    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:42.615328    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:42.615403    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:42.630494    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:42.630567    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:42.642065    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:42.642133    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:42.652977    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:42.653046    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:42.664797    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:42.664855    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:42.675414    4100 logs.go:276] 0 containers: []
	W0719 12:04:42.675425    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:42.675485    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:42.693094    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:42.693111    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:42.693117    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:42.707039    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:42.707050    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:42.742208    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:42.742216    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:42.746935    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:42.746940    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:42.758806    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:42.758815    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:42.770440    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:42.770450    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:42.789261    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:42.789275    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:42.800755    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:42.800765    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:42.814187    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:42.814199    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:42.826360    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:42.826370    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:42.838290    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:42.838309    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:42.856242    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:42.856257    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:42.881189    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:42.881196    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:42.892337    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:42.892347    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:42.928008    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:42.928020    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:40.134097    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:40.134137    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:45.440739    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:45.134337    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:45.134362    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:50.442963    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:50.443131    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:50.457018    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:50.457100    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:50.468389    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:50.468455    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:50.480002    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:50.480071    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:50.494006    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:50.494064    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:50.504211    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:50.504268    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:50.514899    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:50.514969    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:50.525152    4100 logs.go:276] 0 containers: []
	W0719 12:04:50.525163    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:50.525221    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:50.535775    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:50.535792    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:50.535799    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:50.540856    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:50.540862    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:04:50.551975    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:50.551986    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:50.575605    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:50.575613    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:50.592929    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:50.592941    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:50.610277    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:50.610291    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:50.622412    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:50.622425    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:50.634296    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:50.634307    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:50.649375    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:50.649391    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:50.661661    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:50.661675    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:50.672912    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:50.672922    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:50.707376    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:50.707384    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:50.742243    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:50.742254    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:50.756556    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:50.756566    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:50.770555    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:50.770568    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:50.134722    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:50.134796    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:53.284524    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:55.136118    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:55.136143    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:58.287082    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:58.287246    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:04:58.301236    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:04:58.301319    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:04:58.312717    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:04:58.312787    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:04:58.329072    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:04:58.329143    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:04:58.339857    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:04:58.339925    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:04:58.353698    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:04:58.353766    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:04:58.368506    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:04:58.368572    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:04:58.379531    4100 logs.go:276] 0 containers: []
	W0719 12:04:58.379541    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:04:58.379597    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:04:58.390088    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:04:58.390103    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:04:58.390109    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:04:58.395383    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:04:58.395392    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:04:58.432551    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:04:58.432562    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:04:58.451821    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:04:58.451832    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:04:58.463506    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:04:58.463517    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:04:58.496237    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:04:58.496244    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:04:58.507913    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:04:58.507923    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:04:58.520184    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:04:58.520194    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:04:58.535438    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:04:58.535448    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:04:58.549250    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:04:58.549259    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:04:58.561818    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:04:58.561827    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:04:58.579331    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:04:58.579342    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:04:58.603207    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:04:58.603215    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:04:58.615556    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:04:58.615566    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:04:58.627765    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:04:58.627779    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:01.142226    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:00.137835    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:00.137859    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:06.142797    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:06.142936    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:06.157690    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:06.157772    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:06.170357    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:06.170426    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:06.192248    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:06.192320    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:06.204105    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:06.204176    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:06.214609    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:06.214666    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:06.225498    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:06.225566    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:06.236527    4100 logs.go:276] 0 containers: []
	W0719 12:05:06.236538    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:06.236598    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:06.254543    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:06.254561    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:06.254566    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:06.290301    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:06.290321    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:06.302826    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:06.302837    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:06.327105    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:06.327114    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:06.338211    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:06.338224    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:06.351767    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:06.351780    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:06.363627    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:06.363637    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:06.381375    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:06.381388    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:06.393087    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:06.393096    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:06.408334    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:06.408344    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:06.412764    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:06.412770    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:06.427109    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:06.427122    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:06.443488    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:06.443499    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:06.478459    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:06.478470    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:06.492872    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:06.492883    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:05.138348    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:05.138758    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:05.159786    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:05.159867    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:05.176585    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:05.176650    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:05.187648    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:05.187722    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:05.199368    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:05.199437    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:05.210160    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:05.210229    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:05.221554    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:05.221621    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:05.234123    4225 logs.go:276] 0 containers: []
	W0719 12:05:05.234133    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:05.234193    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:05.244832    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:05.244849    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:05.244854    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:05.249419    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:05.249426    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:05.284276    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:05.284290    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:05.298669    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:05.298683    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:05.310988    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:05.310999    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:05.327066    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:05.327079    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:05.339410    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:05.339421    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:05.376860    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:05.376876    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:05.393172    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:05.393183    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:05.404674    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:05.404686    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:05.417705    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:05.417717    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:05.435612    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:05.435622    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:05.447993    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:05.448002    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:07.974532    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:09.006831    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:12.977109    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:12.977278    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:12.995929    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:12.996023    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:13.009052    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:13.009129    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:13.020149    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:13.020218    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:13.030994    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:13.031058    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:13.041674    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:13.041738    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:13.052377    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:13.052443    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:13.063049    4225 logs.go:276] 0 containers: []
	W0719 12:05:13.063060    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:13.063117    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:13.073511    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:13.073524    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:13.073530    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:13.084674    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:13.084689    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:13.098837    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:13.098847    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:13.109827    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:13.109841    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:13.125416    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:13.125428    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:13.137331    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:13.137341    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:13.158611    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:13.158622    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:13.169842    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:13.169856    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:13.194693    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:13.194704    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:13.233418    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:13.233435    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:13.237920    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:13.237928    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:13.273271    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:13.273286    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:13.288285    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:13.288295    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:14.009170    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:14.009392    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:14.038919    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:14.039033    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:14.058625    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:14.058697    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:14.072176    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:14.072255    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:14.083735    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:14.083801    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:14.094205    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:14.094270    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:14.109529    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:14.109592    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:14.119588    4100 logs.go:276] 0 containers: []
	W0719 12:05:14.119608    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:14.119667    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:14.130631    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:14.130648    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:14.130653    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:14.142995    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:14.143004    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:14.167983    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:14.167990    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:14.180166    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:14.180177    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:14.184804    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:14.184814    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:14.196818    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:14.196830    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:14.208695    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:14.208708    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:14.226176    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:14.226186    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:14.245004    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:14.245014    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:14.280419    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:14.280428    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:14.292083    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:14.292094    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:14.328195    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:14.328209    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:14.345893    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:14.345905    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:14.361978    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:14.361990    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:14.378904    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:14.378915    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:16.897658    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:15.801988    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:21.899874    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:21.900039    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:21.917328    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:21.917418    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:21.931366    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:21.931437    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:21.943177    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:21.943246    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:21.954193    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:21.954265    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:21.964932    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:21.965001    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:21.975618    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:21.975686    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:21.985947    4100 logs.go:276] 0 containers: []
	W0719 12:05:21.985959    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:21.986018    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:21.996637    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:21.996651    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:21.996656    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:22.031278    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:22.031290    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:22.044989    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:22.044999    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:22.057504    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:22.057518    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:22.073601    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:22.073614    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:22.084974    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:22.084984    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:22.120272    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:22.120283    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:22.132684    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:22.132694    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:22.144860    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:22.144871    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:22.162898    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:22.162909    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:22.186215    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:22.186222    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:22.190846    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:22.190852    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:22.205240    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:22.205255    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:22.216494    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:22.216504    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:22.232121    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:22.232131    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:20.803450    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:20.803704    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:20.829663    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:20.829763    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:20.848257    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:20.848335    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:20.862198    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:20.862263    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:20.873423    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:20.873489    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:20.884018    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:20.884077    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:20.895141    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:20.895207    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:20.910669    4225 logs.go:276] 0 containers: []
	W0719 12:05:20.910682    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:20.910735    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:20.921558    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:20.921571    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:20.921577    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:20.935967    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:20.935977    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:20.947830    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:20.947841    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:20.959883    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:20.959894    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:20.971252    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:20.971288    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:20.983211    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:20.983222    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:21.019177    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:21.019188    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:21.024161    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:21.024168    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:21.042869    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:21.042884    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:21.054646    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:21.054655    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:21.070240    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:21.070250    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:21.091840    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:21.091851    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:21.117563    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:21.117574    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:23.658548    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:24.745871    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:28.660787    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:28.660936    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:28.672624    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:28.672700    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:28.683561    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:28.683631    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:28.694045    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:28.694109    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:28.704223    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:28.704286    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:28.715747    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:28.715810    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:28.726231    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:28.726297    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:28.736184    4225 logs.go:276] 0 containers: []
	W0719 12:05:28.736196    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:28.736248    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:28.746646    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:28.746663    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:28.746669    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:28.760609    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:28.760620    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:28.774872    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:28.774884    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:28.786320    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:28.786330    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:28.797957    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:28.797967    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:28.815542    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:28.815551    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:28.827239    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:28.827252    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:28.866879    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:28.866898    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:28.871585    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:28.871593    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:28.882592    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:28.882602    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:28.898507    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:28.898517    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:28.922476    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:28.922485    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:28.961228    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:28.961239    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:29.748214    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:29.748332    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:29.759869    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:29.759939    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:29.770920    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:29.770986    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:29.781900    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:29.781971    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:29.792738    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:29.792804    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:29.803294    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:29.803355    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:29.813873    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:29.813938    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:29.828693    4100 logs.go:276] 0 containers: []
	W0719 12:05:29.828706    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:29.828757    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:29.839178    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:29.839195    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:29.839200    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:29.854860    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:29.854874    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:29.866731    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:29.866744    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:29.882025    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:29.882036    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:29.900227    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:29.900238    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:29.935894    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:29.935916    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:29.971544    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:29.971559    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:29.983666    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:29.983680    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:29.995354    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:29.995364    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:30.011299    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:30.011310    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:30.036479    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:30.036486    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:30.056797    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:30.056808    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:30.070592    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:30.070604    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:30.084588    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:30.084598    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:30.089148    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:30.089156    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:32.602687    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:31.477037    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:37.604905    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:37.605018    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:37.618320    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:37.618390    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:37.629884    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:37.629952    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:37.641251    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:37.641328    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:37.652558    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:37.652622    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:37.663332    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:37.663403    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:37.674255    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:37.674332    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:37.683614    4100 logs.go:276] 0 containers: []
	W0719 12:05:37.683624    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:37.683672    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:37.694143    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:37.694161    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:37.694167    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:37.711668    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:37.711680    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:37.723866    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:37.723877    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:37.748407    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:37.748414    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:37.763001    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:37.763014    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:37.777497    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:37.777506    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:37.789192    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:37.789203    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:37.800809    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:37.800824    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:37.812467    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:37.812477    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:37.817137    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:37.817143    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:37.828761    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:37.828771    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:37.864513    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:37.864527    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:37.900314    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:37.900325    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:37.912524    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:37.912535    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:37.934295    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:37.934305    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:36.479274    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:36.479386    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:36.490786    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:36.490863    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:36.501874    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:36.501942    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:36.512585    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:36.512650    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:36.523248    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:36.523317    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:36.533971    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:36.534046    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:36.544909    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:36.544970    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:36.556192    4225 logs.go:276] 0 containers: []
	W0719 12:05:36.556203    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:36.556257    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:36.567598    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:36.567611    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:36.567617    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:36.581377    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:36.581388    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:36.601550    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:36.601560    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:36.613711    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:36.613723    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:36.633617    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:36.633627    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:36.645725    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:36.645736    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:36.670648    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:36.670658    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:36.708263    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:36.708275    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:36.713014    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:36.713022    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:36.725226    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:36.725238    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:36.737162    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:36.737173    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:36.748937    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:36.748950    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:36.783093    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:36.783107    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:39.299472    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:40.451675    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:44.301803    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:44.302182    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:44.331879    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:44.332008    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:44.349822    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:44.349904    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:44.363821    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:44.363921    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:44.375144    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:44.375216    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:44.386600    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:44.386664    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:44.402359    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:44.402427    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:44.412592    4225 logs.go:276] 0 containers: []
	W0719 12:05:44.412604    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:44.412665    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:44.423539    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:44.423554    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:44.423559    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:44.434992    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:44.435001    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:44.452158    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:44.452169    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:44.463894    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:44.463906    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:44.475827    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:44.475839    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:44.487654    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:44.487666    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:44.523696    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:44.523704    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:44.528291    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:44.528301    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:44.539996    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:44.540007    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:44.557944    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:44.557954    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:44.581293    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:44.581302    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:44.616060    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:44.616072    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:44.633734    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:44.633743    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:45.453968    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:45.454121    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:45.491890    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:45.491968    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:45.509947    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:45.510033    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:45.523393    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:45.523466    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:45.540026    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:45.540086    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:45.550659    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:45.550726    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:45.561433    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:45.561489    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:45.572051    4100 logs.go:276] 0 containers: []
	W0719 12:05:45.572061    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:45.572109    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:45.582476    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:45.582493    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:45.582500    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:45.602036    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:45.602047    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:45.613555    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:45.613566    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:45.627337    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:45.627350    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:45.642880    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:45.642892    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:45.660911    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:45.660923    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:45.672212    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:45.672223    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:45.694942    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:45.694951    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:45.728163    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:45.728171    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:45.732504    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:45.732513    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:45.744665    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:45.744674    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:45.759576    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:45.759587    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:45.796935    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:45.796947    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:45.819224    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:45.819235    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:45.831740    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:45.831752    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:47.149504    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:48.347758    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:52.151729    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:52.151918    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:52.169064    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:52.169149    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:52.181699    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:52.181766    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:52.192870    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:52.192933    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:52.203472    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:52.203539    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:52.222085    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:52.222156    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:52.232596    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:52.232665    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:52.243222    4225 logs.go:276] 0 containers: []
	W0719 12:05:52.243232    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:52.243287    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:52.254975    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:52.254991    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:52.254997    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:52.292096    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:52.292105    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:52.307230    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:52.307240    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:52.324609    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:52.324619    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:52.348550    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:52.348557    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:52.359924    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:52.359934    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:52.364733    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:52.364739    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:52.400458    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:52.400472    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:52.414916    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:52.414928    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:52.430264    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:52.430274    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:52.442574    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:52.442586    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:52.455036    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:52.455046    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:52.468295    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:52.468306    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:53.349842    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:53.350056    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:53.372422    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:05:53.372504    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:53.384921    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:05:53.384992    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:53.395965    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:05:53.396034    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:53.407226    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:05:53.407286    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:53.417474    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:05:53.417540    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:53.427506    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:05:53.427577    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:53.437746    4100 logs.go:276] 0 containers: []
	W0719 12:05:53.437759    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:53.437817    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:53.449790    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:05:53.449807    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:53.449812    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:53.485314    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:05:53.485339    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:05:53.497912    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:05:53.497927    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:05:53.509405    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:05:53.509416    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:05:53.524299    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:05:53.524313    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:05:53.536014    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:53.536032    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:53.540821    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:05:53.540839    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:05:53.554955    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:53.554964    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:53.579363    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:05:53.579371    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:05:53.594351    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:05:53.594361    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:05:53.612148    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:05:53.612165    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:53.623966    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:53.623979    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:53.659954    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:05:53.659966    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:05:53.674249    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:05:53.674260    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:05:53.685868    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:05:53.685879    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:05:56.199442    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:54.982082    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:01.201830    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:01.202062    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:01.222678    4100 logs.go:276] 1 containers: [58fb6fbf1253]
	I0719 12:06:01.222771    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:01.238133    4100 logs.go:276] 1 containers: [bc70bc5aa0a5]
	I0719 12:06:01.238213    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:01.250729    4100 logs.go:276] 4 containers: [0b71b4e60b99 bf1c0e067728 15a4c09ca72b 18f4d73ea1b1]
	I0719 12:06:01.250801    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:01.261292    4100 logs.go:276] 1 containers: [f76b0584a01a]
	I0719 12:06:01.261354    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:01.272411    4100 logs.go:276] 1 containers: [23bc11f870b5]
	I0719 12:06:01.272475    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:01.286501    4100 logs.go:276] 1 containers: [d4068c24c072]
	I0719 12:06:01.286570    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:01.298420    4100 logs.go:276] 0 containers: []
	W0719 12:06:01.298430    4100 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:01.298484    4100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:01.309087    4100 logs.go:276] 1 containers: [0e2e0fca042e]
	I0719 12:06:01.309103    4100 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:01.309110    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:01.314158    4100 logs.go:123] Gathering logs for kube-apiserver [58fb6fbf1253] ...
	I0719 12:06:01.314165    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58fb6fbf1253"
	I0719 12:06:01.329089    4100 logs.go:123] Gathering logs for coredns [0b71b4e60b99] ...
	I0719 12:06:01.329102    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b71b4e60b99"
	I0719 12:06:01.340941    4100 logs.go:123] Gathering logs for coredns [bf1c0e067728] ...
	I0719 12:06:01.340953    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf1c0e067728"
	I0719 12:06:01.355028    4100 logs.go:123] Gathering logs for kube-proxy [23bc11f870b5] ...
	I0719 12:06:01.355040    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bc11f870b5"
	I0719 12:06:01.367233    4100 logs.go:123] Gathering logs for kube-controller-manager [d4068c24c072] ...
	I0719 12:06:01.367246    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4068c24c072"
	I0719 12:06:01.385029    4100 logs.go:123] Gathering logs for coredns [15a4c09ca72b] ...
	I0719 12:06:01.385039    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15a4c09ca72b"
	I0719 12:06:01.396733    4100 logs.go:123] Gathering logs for coredns [18f4d73ea1b1] ...
	I0719 12:06:01.396750    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18f4d73ea1b1"
	I0719 12:06:01.408349    4100 logs.go:123] Gathering logs for storage-provisioner [0e2e0fca042e] ...
	I0719 12:06:01.408360    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2e0fca042e"
	I0719 12:06:01.420211    4100 logs.go:123] Gathering logs for container status ...
	I0719 12:06:01.420224    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:01.432498    4100 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:01.432509    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:01.465779    4100 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:01.465787    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:01.501514    4100 logs.go:123] Gathering logs for etcd [bc70bc5aa0a5] ...
	I0719 12:06:01.501527    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc70bc5aa0a5"
	I0719 12:06:01.516640    4100 logs.go:123] Gathering logs for kube-scheduler [f76b0584a01a] ...
	I0719 12:06:01.516649    4100 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76b0584a01a"
	I0719 12:06:01.532878    4100 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:01.532889    4100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:59.984364    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:59.984609    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:00.006396    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:00.006482    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:00.021238    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:00.021307    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:00.033698    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:06:00.033775    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:00.045832    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:00.045902    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:00.056454    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:00.056523    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:00.067570    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:00.067641    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:00.078718    4225 logs.go:276] 0 containers: []
	W0719 12:06:00.078729    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:00.078783    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:00.088936    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:00.088948    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:00.088953    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:00.103599    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:00.103610    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:00.115517    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:00.115529    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:00.132800    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:00.132810    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:00.144961    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:00.144970    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:00.170331    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:00.170347    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:00.208197    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:00.208206    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:00.222765    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:00.222781    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:00.234700    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:00.234711    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:00.249779    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:00.249789    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:00.262774    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:00.262795    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:00.274373    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:00.274383    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:00.278582    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:00.278591    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:02.818658    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:04.059477    4100 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:09.061759    4100 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:09.066404    4100 out.go:177] 
	W0719 12:06:09.070203    4100 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0719 12:06:09.070214    4100 out.go:239] * 
	W0719 12:06:09.070982    4100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:06:09.082224    4100 out.go:177] 
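	The GUEST_START exit above is the direct outcome of the pattern repeated throughout this log: api_server.go polls https://10.0.2.15:8443/healthz, each probe times out ("Client.Timeout exceeded while awaiting headers"), and once the overall 6m0s node deadline passes, the run aborts. The following is a minimal sketch of that polling pattern in Go, assuming a simplified standalone client; it is not minikube's actual api_server.go, and the InsecureSkipVerify shortcut stands in for minikube's real certificate handling.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthy polls the apiserver /healthz endpoint until it returns
// "ok" or the context deadline expires, mirroring the check/stopped
// pairs (api_server.go:253 / api_server.go:269) seen in the log above.
func waitForHealthy(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly matches the ~5s gap between probes in the log
		Transport: &http.Transport{
			// the apiserver serves a self-signed cert; a real client would pin the CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(time.Second):
		}
	}
}

func main() {
	// "wait 6m0s for node" from the GUEST_START message above
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthy(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
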
	I0719 12:06:07.820961    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:07.821137    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:07.837000    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:07.837072    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:07.861185    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:07.861260    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:07.871817    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:07.871895    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:07.882287    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:07.882356    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:07.892854    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:07.892917    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:07.903082    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:07.903149    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:07.914397    4225 logs.go:276] 0 containers: []
	W0719 12:06:07.914406    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:07.914456    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:07.925151    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:07.925168    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:07.925175    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:07.940522    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:07.940537    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:07.954331    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:07.954341    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:07.971197    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:07.971207    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:07.983189    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:07.983202    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:07.994853    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:07.994865    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:08.009094    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:08.009105    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:08.020892    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:08.020906    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:08.035370    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:08.035382    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:08.040232    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:08.040239    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:08.055464    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:08.055475    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:08.080411    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:08.080418    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:08.115647    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:08.115659    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:08.127371    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:08.127382    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:08.164661    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:08.164671    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:10.678031    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:15.680210    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:15.680417    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:15.708486    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:15.708589    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:15.725400    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:15.725478    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:15.738391    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:15.738474    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:15.749212    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:15.749278    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:15.759827    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:15.759890    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:15.770601    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:15.770665    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:15.782084    4225 logs.go:276] 0 containers: []
	W0719 12:06:15.782095    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:15.782155    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:15.792370    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:15.792387    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:15.792392    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:15.818188    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:15.818199    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:15.831670    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:15.831681    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:15.850178    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:15.850192    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:15.861395    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:15.861407    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:15.877166    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:15.877177    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:15.890047    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:15.890059    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:15.894803    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:15.894810    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:15.908619    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:15.908629    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:15.920197    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:15.920211    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:15.937395    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:15.937414    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:15.971826    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:15.971837    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:15.986369    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:15.986383    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:15.998029    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:15.998043    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:16.009312    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:16.009324    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:18.548010    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:23.549116    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:23.549328    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:23.568386    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:23.568478    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:23.582442    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:23.582515    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:23.597995    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:23.598070    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:23.608368    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:23.608434    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:23.619232    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:23.619295    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:23.631490    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:23.631559    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:23.641758    4225 logs.go:276] 0 containers: []
	W0719 12:06:23.641767    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:23.641816    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:23.652031    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:23.652049    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:23.652055    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:23.664551    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:23.664562    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:23.678580    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:23.678592    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:23.690347    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:23.690357    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:23.701965    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:23.701975    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:23.713706    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:23.713716    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:23.726855    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:23.726867    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:23.745800    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:23.745811    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:23.761179    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:23.761190    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:23.779962    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:23.779976    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:23.819169    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:23.819183    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:23.833181    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:23.833191    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:23.858522    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:23.858532    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:23.873181    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:23.873190    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:23.908716    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:23.908732    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
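The cycle above is minikube's wait-for-apiserver loop: the healthz GET against https://10.0.2.15:8443/healthz never answers, so each iteration re-enumerates the containers and re-collects their logs before probing again. A minimal standalone sketch of an equivalent probe, in Go (a hypothetical helper, not minikube's own api_server.go; assumes the guest IP is reachable from where it runs):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver presents a self-signed cert, so a kubeconfig-less
		// probe has to skip verification; only liveness is being tested.
		client := &http.Client{
			Timeout:   5 * time.Second, // roughly the gap between the probe and "stopped" above
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err) // the failure mode in this run
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // "200 OK" on a healthy control plane
	}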
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-07-19 18:55:49 UTC, ends at Fri 2024-07-19 19:06:25 UTC. --
	Jul 19 19:06:09 running-upgrade-589000 dockerd[3246]: time="2024-07-19T19:06:09.927694979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 19:06:09 running-upgrade-589000 dockerd[3246]: time="2024-07-19T19:06:09.927738145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 19:06:09 running-upgrade-589000 dockerd[3246]: time="2024-07-19T19:06:09.927763228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 19:06:09 running-upgrade-589000 dockerd[3246]: time="2024-07-19T19:06:09.927893933Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7213c062ea3ef110606fbb93e8b91b277844e9a4dc787246dee2917848299a1d pid=18612 runtime=io.containerd.runc.v2
	Jul 19 19:06:10 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:10Z" level=error msg="ContainerStats resp: {0x40004fca80 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000742780 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000629880 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000742cc0 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000860000 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000743780 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000743840 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=error msg="ContainerStats resp: {0x4000860b40 linux}"
	Jul 19 19:06:11 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 19:06:16 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 19:06:21 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:21Z" level=error msg="ContainerStats resp: {0x400078cb80 linux}"
	Jul 19 19:06:21 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:21Z" level=error msg="ContainerStats resp: {0x40006c3900 linux}"
	Jul 19 19:06:21 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 19:06:22 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:22Z" level=error msg="ContainerStats resp: {0x40006c37c0 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x400098be40 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x4000728f00 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x40006280c0 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x4000729a80 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x4000729ec0 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x4000629040 linux}"
	Jul 19 19:06:23 running-upgrade-589000 cri-dockerd[3088]: time="2024-07-19T19:06:23Z" level=error msg="ContainerStats resp: {0x4000629440 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7213c062ea3ef       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   5a338152626bd
	060486f7ce215       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   c56f5c529b4f6
	0b71b4e60b994       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5a338152626bd
	bf1c0e0677286       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   c56f5c529b4f6
	23bc11f870b5a       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   078eefd40d791
	0e2e0fca042e8       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   29837d7a4c17f
	bc70bc5aa0a53       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   99af4b6690ef0
	d4068c24c0724       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   4021c80136ce4
	f76b0584a01a6       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   d692c4459a536
	58fb6fbf12532       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   c35c4dcad77a6
	
	
	==> coredns [060486f7ce21] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2657940115744261141.8602046367459833665. HINFO: read udp 10.244.0.2:41233->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2657940115744261141.8602046367459833665. HINFO: read udp 10.244.0.2:35026->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2657940115744261141.8602046367459833665. HINFO: read udp 10.244.0.2:40944->10.0.2.3:53: i/o timeout
	
	
	==> coredns [0b71b4e60b99] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:32787->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:41390->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:33774->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:56063->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:43923->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:53555->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:53839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:35710->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:53577->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8314825028601408071.2309453040880582003. HINFO: read udp 10.244.0.3:40341->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7213c062ea3e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 140562105617316182.4369048427279324432. HINFO: read udp 10.244.0.3:34699->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 140562105617316182.4369048427279324432. HINFO: read udp 10.244.0.3:56888->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 140562105617316182.4369048427279324432. HINFO: read udp 10.244.0.3:47031->10.0.2.3:53: i/o timeout
	
	
	==> coredns [bf1c0e067728] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:54697->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:50185->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:42033->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:57176->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:45748->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:34494->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:41250->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:34142->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:58142->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 953179740238106734.5128405024801507101. HINFO: read udp 10.244.0.2:53447->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
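All four coredns containers fail the same way: coredns itself starts, but the HINFO probes it sends to the upstream resolver at 10.0.2.3:53 (the DNS address QEMU's user-mode network hands out) time out. A short Go sketch of that same lookup path, assuming it runs inside the guest:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send queries straight to the upstream coredns forwards to,
		// bypassing /etc/resolv.conf.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "k8s.io")
		if err != nil {
			fmt.Println("upstream DNS unreachable:", err) // matches the i/o timeouts above
			return
		}
		fmt.Println("resolved:", addrs)
	}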
	
	
	==> describe nodes <==
	Name:               running-upgrade-589000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-589000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221
	                    minikube.k8s.io/name=running-upgrade-589000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T12_02_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 19:02:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-589000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 19:06:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 19:02:08 +0000   Fri, 19 Jul 2024 19:02:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 19:02:08 +0000   Fri, 19 Jul 2024 19:02:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 19:02:08 +0000   Fri, 19 Jul 2024 19:02:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 19:02:08 +0000   Fri, 19 Jul 2024 19:02:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-589000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 92d54d35a5ed4d9c8115269b0230a136
	  System UUID:                92d54d35a5ed4d9c8115269b0230a136
	  Boot ID:                    56efcaec-05b9-4427-b18f-140e6be778ab
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4k7xn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-fn56c                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-589000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-589000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-589000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-bmphc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-589000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-589000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-589000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-589000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-589000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-589000 event: Registered Node running-upgrade-589000 in Controller
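A note on the resource tables above: kubectl prints plain percentages such as "100m (5%)", but raw minikube log captures often render them as "100m (5%!)(MISSING)", because the captured text is re-fed through a Printf-style formatter somewhere in the relay and a bare % is parsed as a verb with no argument. A minimal Go sketch reproducing the mangling (hypothetical; not minikube's actual call site):

	package main

	import "fmt"

	func main() {
		captured := "cpu 850m (42%)\n" // a line as kubectl actually printed it
		// Reusing captured text as a format string makes fmt treat ")" as an
		// unknown verb with no matching argument, emitting "%!)(MISSING)".
		fmt.Printf(captured) // prints: cpu 850m (42%!)(MISSING)
	}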
	
	
	==> dmesg <==
	[  +0.075297] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.080441] systemd-fstab-generator[1173]: Ignoring "noauto" for root device
	[  +1.145231] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.089104] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
	[  +0.076550] systemd-fstab-generator[1334]: Ignoring "noauto" for root device
	[ +15.141829] systemd-fstab-generator[1610]: Ignoring "noauto" for root device
	[  +0.311771] kauditd_printk_skb: 29 callbacks suppressed
	[ +23.823123] systemd-fstab-generator[2328]: Ignoring "noauto" for root device
	[  +2.445295] systemd-fstab-generator[2607]: Ignoring "noauto" for root device
	[  +0.150327] systemd-fstab-generator[2646]: Ignoring "noauto" for root device
	[  +0.101657] systemd-fstab-generator[2657]: Ignoring "noauto" for root device
	[  +0.092511] systemd-fstab-generator[2670]: Ignoring "noauto" for root device
	[  +1.570858] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.123480] systemd-fstab-generator[3045]: Ignoring "noauto" for root device
	[  +0.068928] systemd-fstab-generator[3056]: Ignoring "noauto" for root device
	[  +0.086928] systemd-fstab-generator[3067]: Ignoring "noauto" for root device
	[  +0.096387] systemd-fstab-generator[3081]: Ignoring "noauto" for root device
	[  +2.270318] systemd-fstab-generator[3233]: Ignoring "noauto" for root device
	[  +3.062189] systemd-fstab-generator[3619]: Ignoring "noauto" for root device
	[  +1.085431] systemd-fstab-generator[3888]: Ignoring "noauto" for root device
	[Jul19 18:58] kauditd_printk_skb: 68 callbacks suppressed
	[Jul19 19:01] kauditd_printk_skb: 23 callbacks suppressed
	[Jul19 19:02] systemd-fstab-generator[11654]: Ignoring "noauto" for root device
	[  +5.638437] systemd-fstab-generator[12262]: Ignoring "noauto" for root device
	[  +0.503993] systemd-fstab-generator[12392]: Ignoring "noauto" for root device
	
	
	==> etcd [bc70bc5aa0a5] <==
	{"level":"info","ts":"2024-07-19T19:02:03.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-19T19:02:03.531Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-19T19:02:03.557Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T19:02:03.558Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T19:02:03.558Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T19:02:03.558Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-19T19:02:03.558Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-589000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:02:04.211Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:02:04.212Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T19:02:04.212Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T19:02:04.213Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-19T19:02:04.213Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T19:02:04.213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T19:02:04.216Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:02:04.216Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T19:02:04.216Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:06:25 up 10 min,  0 users,  load average: 0.58, 0.49, 0.27
	Linux running-upgrade-589000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [58fb6fbf1253] <==
	I0719 19:02:05.453928       1 controller.go:611] quota admission added evaluator for: namespaces
	I0719 19:02:05.495865       1 cache.go:39] Caches are synced for autoregister controller
	I0719 19:02:05.496091       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 19:02:05.496265       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0719 19:02:05.496904       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0719 19:02:05.496912       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 19:02:05.501349       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0719 19:02:06.226665       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 19:02:06.398693       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 19:02:06.400422       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 19:02:06.400432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 19:02:06.534851       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 19:02:06.547221       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 19:02:06.658891       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0719 19:02:06.660864       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0719 19:02:06.661250       1 controller.go:611] quota admission added evaluator for: endpoints
	I0719 19:02:06.662808       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 19:02:07.527529       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0719 19:02:07.935018       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0719 19:02:07.938061       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0719 19:02:07.953424       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0719 19:02:07.992878       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 19:02:21.648104       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0719 19:02:21.698193       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0719 19:02:22.727687       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [d4068c24c072] <==
	I0719 19:02:20.799466       1 shared_informer.go:262] Caches are synced for TTL
	I0719 19:02:20.805385       1 shared_informer.go:262] Caches are synced for PV protection
	I0719 19:02:20.806740       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0719 19:02:20.806773       1 shared_informer.go:262] Caches are synced for namespace
	I0719 19:02:20.829039       1 shared_informer.go:262] Caches are synced for taint
	I0719 19:02:20.829086       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0719 19:02:20.829308       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0719 19:02:20.829537       1 event.go:294] "Event occurred" object="running-upgrade-589000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-589000 event: Registered Node running-upgrade-589000 in Controller"
	W0719 19:02:20.829785       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-589000. Assuming now as a timestamp.
	I0719 19:02:20.829816       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0719 19:02:20.839937       1 shared_informer.go:262] Caches are synced for daemon sets
	I0719 19:02:20.842127       1 shared_informer.go:262] Caches are synced for cronjob
	I0719 19:02:20.896196       1 shared_informer.go:262] Caches are synced for HPA
	I0719 19:02:20.983476       1 shared_informer.go:262] Caches are synced for deployment
	I0719 19:02:20.996833       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 19:02:20.997952       1 shared_informer.go:262] Caches are synced for disruption
	I0719 19:02:20.997996       1 disruption.go:371] Sending events to api server.
	I0719 19:02:21.007065       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 19:02:21.421122       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 19:02:21.497022       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 19:02:21.497034       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0719 19:02:21.652070       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bmphc"
	I0719 19:02:21.699246       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0719 19:02:21.799775       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-4k7xn"
	I0719 19:02:21.802397       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fn56c"
	
	
	==> kube-proxy [23bc11f870b5] <==
	I0719 19:02:22.717743       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0719 19:02:22.717767       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0719 19:02:22.717778       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0719 19:02:22.725967       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0719 19:02:22.725977       1 server_others.go:206] "Using iptables Proxier"
	I0719 19:02:22.726028       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0719 19:02:22.726148       1 server.go:661] "Version info" version="v1.24.1"
	I0719 19:02:22.726158       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 19:02:22.726451       1 config.go:317] "Starting service config controller"
	I0719 19:02:22.726465       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0719 19:02:22.726473       1 config.go:226] "Starting endpoint slice config controller"
	I0719 19:02:22.726475       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0719 19:02:22.726715       1 config.go:444] "Starting node config controller"
	I0719 19:02:22.726717       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0719 19:02:22.827106       1 shared_informer.go:262] Caches are synced for service config
	I0719 19:02:22.827126       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0719 19:02:22.827105       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f76b0584a01a] <==
	W0719 19:02:05.446386       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 19:02:05.446403       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 19:02:05.446441       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 19:02:05.446464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 19:02:05.446488       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:02:05.446517       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 19:02:05.446559       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 19:02:05.446583       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 19:02:05.446621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 19:02:05.446641       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 19:02:05.446666       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 19:02:05.446715       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 19:02:05.446770       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 19:02:05.446804       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 19:02:06.272201       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 19:02:06.272219       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 19:02:06.324162       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 19:02:06.324176       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 19:02:06.408807       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 19:02:06.408837       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 19:02:06.440906       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 19:02:06.440929       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 19:02:06.456624       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 19:02:06.456710       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0719 19:02:06.839265       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-07-19 18:55:49 UTC, ends at Fri 2024-07-19 19:06:25 UTC. --
	Jul 19 19:02:09 running-upgrade-589000 kubelet[12268]: E0719 19:02:09.579289   12268 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-589000\" already exists" pod="kube-system/etcd-running-upgrade-589000"
	Jul 19 19:02:09 running-upgrade-589000 kubelet[12268]: E0719 19:02:09.767841   12268 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-589000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-589000"
	Jul 19 19:02:09 running-upgrade-589000 kubelet[12268]: E0719 19:02:09.973438   12268 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-589000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-589000"
	Jul 19 19:02:10 running-upgrade-589000 kubelet[12268]: I0719 19:02:10.165963   12268 request.go:601] Waited for 1.117848228s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 19 19:02:10 running-upgrade-589000 kubelet[12268]: E0719 19:02:10.169874   12268 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-589000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-589000"
	Jul 19 19:02:20 running-upgrade-589000 kubelet[12268]: I0719 19:02:20.797476   12268 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 19:02:20 running-upgrade-589000 kubelet[12268]: I0719 19:02:20.797801   12268 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 19:02:20 running-upgrade-589000 kubelet[12268]: I0719 19:02:20.834222   12268 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 19:02:20 running-upgrade-589000 kubelet[12268]: I0719 19:02:20.998163   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0182f4ad-5665-4968-a726-43155867781b-tmp\") pod \"storage-provisioner\" (UID: \"0182f4ad-5665-4968-a726-43155867781b\") " pod="kube-system/storage-provisioner"
	Jul 19 19:02:20 running-upgrade-589000 kubelet[12268]: I0719 19:02:20.998204   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bgf\" (UniqueName: \"kubernetes.io/projected/0182f4ad-5665-4968-a726-43155867781b-kube-api-access-84bgf\") pod \"storage-provisioner\" (UID: \"0182f4ad-5665-4968-a726-43155867781b\") " pod="kube-system/storage-provisioner"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.654030   12268 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.802241   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/294e59a5-afff-4cf6-a346-7652b54a9b24-lib-modules\") pod \"kube-proxy-bmphc\" (UID: \"294e59a5-afff-4cf6-a346-7652b54a9b24\") " pod="kube-system/kube-proxy-bmphc"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.802463   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clcc5\" (UniqueName: \"kubernetes.io/projected/294e59a5-afff-4cf6-a346-7652b54a9b24-kube-api-access-clcc5\") pod \"kube-proxy-bmphc\" (UID: \"294e59a5-afff-4cf6-a346-7652b54a9b24\") " pod="kube-system/kube-proxy-bmphc"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.802497   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/294e59a5-afff-4cf6-a346-7652b54a9b24-kube-proxy\") pod \"kube-proxy-bmphc\" (UID: \"294e59a5-afff-4cf6-a346-7652b54a9b24\") " pod="kube-system/kube-proxy-bmphc"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.802523   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/294e59a5-afff-4cf6-a346-7652b54a9b24-xtables-lock\") pod \"kube-proxy-bmphc\" (UID: \"294e59a5-afff-4cf6-a346-7652b54a9b24\") " pod="kube-system/kube-proxy-bmphc"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.804128   12268 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 19:02:21 running-upgrade-589000 kubelet[12268]: I0719 19:02:21.807325   12268 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 19:02:22 running-upgrade-589000 kubelet[12268]: I0719 19:02:22.004017   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvbl\" (UniqueName: \"kubernetes.io/projected/0c002718-b81f-4074-ad9f-3350805d7396-kube-api-access-gnvbl\") pod \"coredns-6d4b75cb6d-4k7xn\" (UID: \"0c002718-b81f-4074-ad9f-3350805d7396\") " pod="kube-system/coredns-6d4b75cb6d-4k7xn"
	Jul 19 19:02:22 running-upgrade-589000 kubelet[12268]: I0719 19:02:22.004067   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c002718-b81f-4074-ad9f-3350805d7396-config-volume\") pod \"coredns-6d4b75cb6d-4k7xn\" (UID: \"0c002718-b81f-4074-ad9f-3350805d7396\") " pod="kube-system/coredns-6d4b75cb6d-4k7xn"
	Jul 19 19:02:22 running-upgrade-589000 kubelet[12268]: I0719 19:02:22.004083   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7abb2601-a0fb-43f8-a883-4b7ae721ce17-config-volume\") pod \"coredns-6d4b75cb6d-fn56c\" (UID: \"7abb2601-a0fb-43f8-a883-4b7ae721ce17\") " pod="kube-system/coredns-6d4b75cb6d-fn56c"
	Jul 19 19:02:22 running-upgrade-589000 kubelet[12268]: I0719 19:02:22.004106   12268 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjlvp\" (UniqueName: \"kubernetes.io/projected/7abb2601-a0fb-43f8-a883-4b7ae721ce17-kube-api-access-pjlvp\") pod \"coredns-6d4b75cb6d-fn56c\" (UID: \"7abb2601-a0fb-43f8-a883-4b7ae721ce17\") " pod="kube-system/coredns-6d4b75cb6d-fn56c"
	Jul 19 19:02:23 running-upgrade-589000 kubelet[12268]: I0719 19:02:23.208947   12268 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c56f5c529b4f63b45e29b8f11ff44d530658b4d5ba109be597289425b1e41d42"
	Jul 19 19:02:23 running-upgrade-589000 kubelet[12268]: I0719 19:02:23.222913   12268 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5a338152626bd9eb3f9342e26b1d28c0814bbe940feb07eccde20d7e88988549"
	Jul 19 19:06:10 running-upgrade-589000 kubelet[12268]: I0719 19:06:10.560501   12268 scope.go:110] "RemoveContainer" containerID="18f4d73ea1b1f2d13a349d71d77c4b69218e071faa47460f87266ba6f234643b"
	Jul 19 19:06:10 running-upgrade-589000 kubelet[12268]: I0719 19:06:10.575031   12268 scope.go:110] "RemoveContainer" containerID="15a4c09ca72be08517874fc281e99e885c1340cacf16f964af9f8628d97c1419"
	
	
	==> storage-provisioner [0e2e0fca042e] <==
	I0719 19:02:21.326734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 19:02:21.330336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 19:02:21.330413       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 19:02:21.333204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 19:02:21.333272       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-589000_e9bccd9c-1883-4956-9ed9-7968d4880af0!
	I0719 19:02:21.333620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61067275-6b4c-42fe-8763-82d621ce0b7f", APIVersion:"v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-589000_e9bccd9c-1883-4956-9ed9-7968d4880af0 became leader
	I0719 19:02:21.434565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-589000_e9bccd9c-1883-4956-9ed9-7968d4880af0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-589000 -n running-upgrade-589000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-589000 -n running-upgrade-589000: exit status 2 (15.658819417s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-589000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-589000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-589000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-589000: (1.257054041s)
--- FAIL: TestRunningBinaryUpgrade (690.04s)

TestKubernetesUpgrade (18.22s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-620000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-620000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.832200625s)

-- stdout --
	* [kubernetes-upgrade-620000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-620000" primary control-plane node in "kubernetes-upgrade-620000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-620000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
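As with most failures in this report, the root cause is environmental: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver's network attach is refused on both the first and the retried VM creation. A minimal Go sketch of the reachability check, assuming it runs on the macOS host with permission to open the socket:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet listens on a unix-domain socket; "connection refused"
		// here reproduces the ERROR lines in the stdout above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}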
** stderr ** 
	I0719 11:58:12.669599    4153 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:58:12.669817    4153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:58:12.669824    4153 out.go:304] Setting ErrFile to fd 2...
	I0719 11:58:12.669826    4153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:58:12.669971    4153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:58:12.671237    4153 out.go:298] Setting JSON to false
	I0719 11:58:12.687745    4153 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3455,"bootTime":1721412037,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:58:12.687829    4153 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:58:12.691751    4153 out.go:177] * [kubernetes-upgrade-620000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:58:12.699665    4153 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:58:12.699771    4153 notify.go:220] Checking for updates...
	I0719 11:58:12.706877    4153 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:58:12.709768    4153 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:58:12.712781    4153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:58:12.715702    4153 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:58:12.718683    4153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:58:12.722019    4153 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:58:12.722089    4153 config.go:182] Loaded profile config "running-upgrade-589000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:58:12.722135    4153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:58:12.726754    4153 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 11:58:12.733694    4153 start.go:297] selected driver: qemu2
	I0719 11:58:12.733700    4153 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:58:12.733706    4153 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:58:12.735883    4153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:58:12.738695    4153 out.go:177] * Automatically selected the socket_vmnet network
	I0719 11:58:12.741719    4153 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:58:12.741751    4153 cni.go:84] Creating CNI manager for ""
	I0719 11:58:12.741757    4153 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 11:58:12.741780    4153 start.go:340] cluster config:
	{Name:kubernetes-upgrade-620000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:58:12.745229    4153 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:58:12.752716    4153 out.go:177] * Starting "kubernetes-upgrade-620000" primary control-plane node in "kubernetes-upgrade-620000" cluster
	I0719 11:58:12.756713    4153 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:58:12.756727    4153 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 11:58:12.756740    4153 cache.go:56] Caching tarball of preloaded images
	I0719 11:58:12.756795    4153 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:58:12.756801    4153 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 11:58:12.756859    4153 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kubernetes-upgrade-620000/config.json ...
	I0719 11:58:12.756871    4153 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kubernetes-upgrade-620000/config.json: {Name:mkd549703e30b601aceddf33c4439d0d236fa440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:58:12.757154    4153 start.go:360] acquireMachinesLock for kubernetes-upgrade-620000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:58:12.757184    4153 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "kubernetes-upgrade-620000"
	I0719 11:58:12.757196    4153 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:58:12.757217    4153 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:58:12.765629    4153 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 11:58:12.781100    4153 start.go:159] libmachine.API.Create for "kubernetes-upgrade-620000" (driver="qemu2")
	I0719 11:58:12.781129    4153 client.go:168] LocalClient.Create starting
	I0719 11:58:12.781197    4153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:58:12.781231    4153 main.go:141] libmachine: Decoding PEM data...
	I0719 11:58:12.781243    4153 main.go:141] libmachine: Parsing certificate...
	I0719 11:58:12.781278    4153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:58:12.781301    4153 main.go:141] libmachine: Decoding PEM data...
	I0719 11:58:12.781309    4153 main.go:141] libmachine: Parsing certificate...
	I0719 11:58:12.781704    4153 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:58:12.928550    4153 main.go:141] libmachine: Creating SSH key...
	I0719 11:58:13.079356    4153 main.go:141] libmachine: Creating Disk image...
	I0719 11:58:13.079367    4153 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:58:13.079559    4153 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:13.089480    4153 main.go:141] libmachine: STDOUT: 
	I0719 11:58:13.089501    4153 main.go:141] libmachine: STDERR: 
	I0719 11:58:13.089584    4153 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2 +20000M
	I0719 11:58:13.098149    4153 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:58:13.098167    4153 main.go:141] libmachine: STDERR: 
	I0719 11:58:13.098190    4153 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:13.098195    4153 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:58:13.098207    4153 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:58:13.098231    4153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:7b:72:f6:fb:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:13.099933    4153 main.go:141] libmachine: STDOUT: 
	I0719 11:58:13.099959    4153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:58:13.099975    4153 client.go:171] duration metric: took 318.847084ms to LocalClient.Create
	I0719 11:58:15.102027    4153 start.go:128] duration metric: took 2.344835667s to createHost
	I0719 11:58:15.102045    4153 start.go:83] releasing machines lock for "kubernetes-upgrade-620000", held for 2.344889166s
	W0719 11:58:15.102064    4153 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:58:15.110066    4153 out.go:177] * Deleting "kubernetes-upgrade-620000" in qemu2 ...
	W0719 11:58:15.122009    4153 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:58:15.122016    4153 start.go:729] Will try again in 5 seconds ...
	I0719 11:58:20.123005    4153 start.go:360] acquireMachinesLock for kubernetes-upgrade-620000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:58:20.123138    4153 start.go:364] duration metric: took 105.083µs to acquireMachinesLock for "kubernetes-upgrade-620000"
	I0719 11:58:20.123176    4153 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 11:58:20.123239    4153 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 11:58:20.131436    4153 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 11:58:20.147258    4153 start.go:159] libmachine.API.Create for "kubernetes-upgrade-620000" (driver="qemu2")
	I0719 11:58:20.147287    4153 client.go:168] LocalClient.Create starting
	I0719 11:58:20.147362    4153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 11:58:20.147395    4153 main.go:141] libmachine: Decoding PEM data...
	I0719 11:58:20.147404    4153 main.go:141] libmachine: Parsing certificate...
	I0719 11:58:20.147438    4153 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 11:58:20.147461    4153 main.go:141] libmachine: Decoding PEM data...
	I0719 11:58:20.147467    4153 main.go:141] libmachine: Parsing certificate...
	I0719 11:58:20.147913    4153 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 11:58:20.288072    4153 main.go:141] libmachine: Creating SSH key...
	I0719 11:58:20.411341    4153 main.go:141] libmachine: Creating Disk image...
	I0719 11:58:20.411350    4153 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 11:58:20.411551    4153 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:20.421254    4153 main.go:141] libmachine: STDOUT: 
	I0719 11:58:20.421276    4153 main.go:141] libmachine: STDERR: 
	I0719 11:58:20.421333    4153 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2 +20000M
	I0719 11:58:20.429428    4153 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 11:58:20.429443    4153 main.go:141] libmachine: STDERR: 
	I0719 11:58:20.429457    4153 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:20.429462    4153 main.go:141] libmachine: Starting QEMU VM...
	I0719 11:58:20.429477    4153 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:58:20.429508    4153 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d1:6c:46:2c:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:20.431231    4153 main.go:141] libmachine: STDOUT: 
	I0719 11:58:20.431248    4153 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:58:20.431261    4153 client.go:171] duration metric: took 283.974625ms to LocalClient.Create
	I0719 11:58:22.433373    4153 start.go:128] duration metric: took 2.310149917s to createHost
	I0719 11:58:22.433412    4153 start.go:83] releasing machines lock for "kubernetes-upgrade-620000", held for 2.310297292s
	W0719 11:58:22.433676    4153 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-620000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-620000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:58:22.444335    4153 out.go:177] 
	W0719 11:58:22.449505    4153 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:58:22.449518    4153 out.go:239] * 
	* 
	W0719 11:58:22.450633    4153 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:58:22.464422    4153 out.go:177] 
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-620000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-620000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-620000: (2.984621084s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-620000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-620000 status --format={{.Host}}: exit status 7 (51.9235ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-620000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
E0719 11:58:27.270116    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-620000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.176870167s)
-- stdout --
	* [kubernetes-upgrade-620000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-620000" primary control-plane node in "kubernetes-upgrade-620000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0719 11:58:25.541613    4187 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:58:25.541747    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:58:25.541750    4187 out.go:304] Setting ErrFile to fd 2...
	I0719 11:58:25.541753    4187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:58:25.541882    4187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:58:25.542932    4187 out.go:298] Setting JSON to false
	I0719 11:58:25.559346    4187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3468,"bootTime":1721412037,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:58:25.559417    4187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:58:25.564603    4187 out.go:177] * [kubernetes-upgrade-620000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:58:25.570518    4187 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:58:25.570585    4187 notify.go:220] Checking for updates...
	I0719 11:58:25.577462    4187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:58:25.580525    4187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:58:25.583526    4187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:58:25.584998    4187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:58:25.588484    4187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:58:25.591777    4187 config.go:182] Loaded profile config "kubernetes-upgrade-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 11:58:25.592052    4187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:58:25.596317    4187 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:58:25.603545    4187 start.go:297] selected driver: qemu2
	I0719 11:58:25.603553    4187 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:58:25.603614    4187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:58:25.606039    4187 cni.go:84] Creating CNI manager for ""
	I0719 11:58:25.606090    4187 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:58:25.606120    4187 start.go:340] cluster config:
	{Name:kubernetes-upgrade-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:58:25.609664    4187 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:58:25.617515    4187 out.go:177] * Starting "kubernetes-upgrade-620000" primary control-plane node in "kubernetes-upgrade-620000" cluster
	I0719 11:58:25.621557    4187 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 11:58:25.621573    4187 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 11:58:25.621587    4187 cache.go:56] Caching tarball of preloaded images
	I0719 11:58:25.621665    4187 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:58:25.621670    4187 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 11:58:25.621723    4187 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kubernetes-upgrade-620000/config.json ...
	I0719 11:58:25.622119    4187 start.go:360] acquireMachinesLock for kubernetes-upgrade-620000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:58:25.622147    4187 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "kubernetes-upgrade-620000"
	I0719 11:58:25.622156    4187 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:58:25.622163    4187 fix.go:54] fixHost starting: 
	I0719 11:58:25.622278    4187 fix.go:112] recreateIfNeeded on kubernetes-upgrade-620000: state=Stopped err=<nil>
	W0719 11:58:25.622286    4187 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:58:25.630489    4187 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-620000" ...
	I0719 11:58:25.634464    4187 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:58:25.634503    4187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d1:6c:46:2c:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:25.636635    4187 main.go:141] libmachine: STDOUT: 
	I0719 11:58:25.636654    4187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:58:25.636684    4187 fix.go:56] duration metric: took 14.52125ms for fixHost
	I0719 11:58:25.636690    4187 start.go:83] releasing machines lock for "kubernetes-upgrade-620000", held for 14.538208ms
	W0719 11:58:25.636695    4187 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:58:25.636730    4187 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:58:25.636734    4187 start.go:729] Will try again in 5 seconds ...
	I0719 11:58:30.638770    4187 start.go:360] acquireMachinesLock for kubernetes-upgrade-620000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:58:30.639109    4187 start.go:364] duration metric: took 273.917µs to acquireMachinesLock for "kubernetes-upgrade-620000"
	I0719 11:58:30.639177    4187 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:58:30.639189    4187 fix.go:54] fixHost starting: 
	I0719 11:58:30.639635    4187 fix.go:112] recreateIfNeeded on kubernetes-upgrade-620000: state=Stopped err=<nil>
	W0719 11:58:30.639650    4187 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:58:30.643974    4187 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-620000" ...
	I0719 11:58:30.649891    4187 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:58:30.650029    4187 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:d1:6c:46:2c:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubernetes-upgrade-620000/disk.qcow2
	I0719 11:58:30.656581    4187 main.go:141] libmachine: STDOUT: 
	I0719 11:58:30.656621    4187 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 11:58:30.656675    4187 fix.go:56] duration metric: took 17.486042ms for fixHost
	I0719 11:58:30.656687    4187 start.go:83] releasing machines lock for "kubernetes-upgrade-620000", held for 17.563583ms
	W0719 11:58:30.656784    4187 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-620000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-620000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 11:58:30.663876    4187 out.go:177] 
	W0719 11:58:30.667961    4187 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 11:58:30.667977    4187 out.go:239] * 
	* 
	W0719 11:58:30.669273    4187 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:58:30.676932    4187 out.go:177] 
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-620000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-620000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-620000 version --output=json: exit status 1 (54.879666ms)
** stderr ** 
	error: context "kubernetes-upgrade-620000" does not exist
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-19 11:58:30.745741 -0700 PDT m=+2736.081172959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-620000 -n kubernetes-upgrade-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-620000 -n kubernetes-upgrade-620000: exit status 7 (32.670458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-620000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-620000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-620000
--- FAIL: TestKubernetesUpgrade (18.22s)
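
Every start attempt in this test dies at the same point: QEMU is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). The following is a minimal, hypothetical preflight sketch in Go (not part of the minikube test suite; the file name and messages are invented for illustration) showing how the daemon's reachability could be checked before any VM is created, using only the socket path seen in the log above:

	// probe_socket_vmnet.go — hypothetical preflight sketch. It assumes only
	// that socket_vmnet, when running, listens on the unix socket path that
	// the failing QEMU invocations above try to connect to.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing log
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same failure mode as the log: the daemon is not running (or the
			// path is wrong), so every "Starting QEMU VM..." step will fail.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening at", sock)
	}

Run on the affected agent, this would fail fast with the same "connection refused" error, pointing at the host environment (daemon not started) rather than at minikube or the kubernetes-upgrade logic itself.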
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.74s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19307
- KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1655697862/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.74s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19307
- KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3920803040/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.29s)
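
Both hyperkit subtests fail identically: minikube correctly refuses the driver with DRV_UNSUPPORTED_OS, since hyperkit is an Intel-only hypervisor, yet the test is still recorded as a failure on this darwin/arm64 agent. A sketch of a guard that would skip these subtests on Apple Silicon follows; the helper name and package are hypothetical, not the actual suite code, which may gate this differently:

	package integration // hypothetical package name

	import (
		"runtime"
		"testing"
	)

	// skipIfHyperkitUnsupported skips tests on platforms where the hyperkit
	// driver can never run, matching the DRV_UNSUPPORTED_OS exit seen above.
	func skipIfHyperkitUnsupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}

Called at the top of each hyperkit subtest, this would turn the two failures above into skips on arm64 agents while leaving amd64 coverage unchanged.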
TestStoppedBinaryUpgrade/Upgrade (574.45s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3867635286 start -p stopped-upgrade-275000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3867635286 start -p stopped-upgrade-275000 --memory=2200 --vm-driver=qemu2 : (40.914516667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3867635286 -p stopped-upgrade-275000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3867635286 -p stopped-upgrade-275000 stop: (12.11482075s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-275000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0719 12:01:30.335218    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 12:01:49.512981    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 12:03:27.265779    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-275000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.306934125s)
-- stdout --
	* [stopped-upgrade-275000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-275000" primary control-plane node in "stopped-upgrade-275000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-275000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0719 11:59:24.825527    4225 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:59:24.825680    4225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:59:24.825688    4225 out.go:304] Setting ErrFile to fd 2...
	I0719 11:59:24.825691    4225 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:59:24.825889    4225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:59:24.827131    4225 out.go:298] Setting JSON to false
	I0719 11:59:24.846668    4225 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3527,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:59:24.846749    4225 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:59:24.852125    4225 out.go:177] * [stopped-upgrade-275000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:59:24.859121    4225 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:59:24.859173    4225 notify.go:220] Checking for updates...
	I0719 11:59:24.865075    4225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:59:24.866142    4225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:59:24.869052    4225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:59:24.872063    4225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:59:24.875080    4225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:59:24.878310    4225 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:59:24.881034    4225 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 11:59:24.884089    4225 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:59:24.888086    4225 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:59:24.895064    4225 start.go:297] selected driver: qemu2
	I0719 11:59:24.895069    4225 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:59:24.895110    4225 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:59:24.897621    4225 cni.go:84] Creating CNI manager for ""
	I0719 11:59:24.897637    4225 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:59:24.897677    4225 start.go:340] cluster config:
	{Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:59:24.897725    4225 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:59:24.905044    4225 out.go:177] * Starting "stopped-upgrade-275000" primary control-plane node in "stopped-upgrade-275000" cluster
	I0719 11:59:24.907997    4225 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 11:59:24.908010    4225 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0719 11:59:24.908016    4225 cache.go:56] Caching tarball of preloaded images
	I0719 11:59:24.908074    4225 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 11:59:24.908079    4225 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0719 11:59:24.908125    4225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/config.json ...
	I0719 11:59:24.908507    4225 start.go:360] acquireMachinesLock for stopped-upgrade-275000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 11:59:24.908537    4225 start.go:364] duration metric: took 24.959µs to acquireMachinesLock for "stopped-upgrade-275000"
	I0719 11:59:24.908546    4225 start.go:96] Skipping create...Using existing machine configuration
	I0719 11:59:24.908550    4225 fix.go:54] fixHost starting: 
	I0719 11:59:24.908646    4225 fix.go:112] recreateIfNeeded on stopped-upgrade-275000: state=Stopped err=<nil>
	W0719 11:59:24.908655    4225 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 11:59:24.915906    4225 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-275000" ...
	I0719 11:59:24.920089    4225 qemu.go:418] Using hvf for hardware acceleration
	I0719 11:59:24.920149    4225 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50502-:22,hostfwd=tcp::50503-:2376,hostname=stopped-upgrade-275000 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/disk.qcow2
	I0719 11:59:24.964919    4225 main.go:141] libmachine: STDOUT: 
	I0719 11:59:24.964947    4225 main.go:141] libmachine: STDERR: 
	I0719 11:59:24.964953    4225 main.go:141] libmachine: Waiting for VM to start (ssh -p 50502 docker@127.0.0.1)...
	I0719 11:59:44.748079    4225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/config.json ...
	I0719 11:59:44.748325    4225 machine.go:94] provisionDockerMachine start ...
	I0719 11:59:44.748377    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:44.748515    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:44.748520    4225 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 11:59:44.803216    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 11:59:44.803237    4225 buildroot.go:166] provisioning hostname "stopped-upgrade-275000"
	I0719 11:59:44.803285    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:44.803406    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:44.803413    4225 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-275000 && echo "stopped-upgrade-275000" | sudo tee /etc/hostname
	I0719 11:59:44.862550    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-275000
	
	I0719 11:59:44.862606    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:44.862725    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:44.862734    4225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-275000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-275000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-275000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 11:59:44.918279    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 11:59:44.918291    4225 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19307-1066/.minikube CaCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19307-1066/.minikube}
	I0719 11:59:44.918305    4225 buildroot.go:174] setting up certificates
	I0719 11:59:44.918309    4225 provision.go:84] configureAuth start
	I0719 11:59:44.918313    4225 provision.go:143] copyHostCerts
	I0719 11:59:44.918386    4225 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem, removing ...
	I0719 11:59:44.918393    4225 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem
	I0719 11:59:44.918696    4225 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/cert.pem (1123 bytes)
	I0719 11:59:44.918924    4225 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem, removing ...
	I0719 11:59:44.918928    4225 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem
	I0719 11:59:44.918997    4225 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/key.pem (1679 bytes)
	I0719 11:59:44.919101    4225 exec_runner.go:144] found /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem, removing ...
	I0719 11:59:44.919105    4225 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem
	I0719 11:59:44.919154    4225 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.pem (1082 bytes)
	I0719 11:59:44.919244    4225 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-275000 san=[127.0.0.1 localhost minikube stopped-upgrade-275000]
	I0719 11:59:45.105085    4225 provision.go:177] copyRemoteCerts
	I0719 11:59:45.105129    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 11:59:45.105138    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 11:59:45.135465    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 11:59:45.142251    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 11:59:45.149336    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 11:59:45.156157    4225 provision.go:87] duration metric: took 237.842125ms to configureAuth
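
configureAuth generates a CA-signed server certificate carrying the SANs listed at provision.go:117 and then copies it into /etc/docker over SSH. minikube does this in Go via crypto/x509; a rough openssl equivalent of the generation step, for illustration only:

	openssl req -new -newkey rsa:2048 -nodes \
	  -subj "/O=jenkins.stopped-upgrade-275000" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-275000') \
	  -days 365 -out server.pem
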
	I0719 11:59:45.156165    4225 buildroot.go:189] setting minikube options for container-runtime
	I0719 11:59:45.156273    4225 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 11:59:45.156305    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.156391    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.156395    4225 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 11:59:45.211513    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 11:59:45.211522    4225 buildroot.go:70] root file system type: tmpfs
	I0719 11:59:45.211575    4225 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 11:59:45.211633    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.211745    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.211777    4225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 11:59:45.272077    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 11:59:45.272125    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.272244    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.272251    4225 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 11:59:45.615040    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 11:59:45.615053    4225 machine.go:97] duration metric: took 866.734125ms to provisionDockerMachine
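
The unit install above uses a write-then-swap pattern: render the whole docker.service to docker.service.new, then move it into place (and daemon-reload/enable/restart) only when diff reports a difference. The same idempotent pattern in isolation, with hypothetical file names:

	sudo tee /etc/systemd/system/myapp.service.new >/dev/null <<'EOF'
	[Unit]
	Description=example unit
	EOF
	sudo diff -u /etc/systemd/system/myapp.service /etc/systemd/system/myapp.service.new || {
	  sudo mv /etc/systemd/system/myapp.service.new /etc/systemd/system/myapp.service
	  sudo systemctl daemon-reload && sudo systemctl enable --now myapp.service
	}

Note that the "diff: can't stat" output above is the expected first-boot path, not an error: the target unit does not exist yet, so diff fails and the mv branch installs and enables it.
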
	I0719 11:59:45.615059    4225 start.go:293] postStartSetup for "stopped-upgrade-275000" (driver="qemu2")
	I0719 11:59:45.615065    4225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 11:59:45.615120    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 11:59:45.615130    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 11:59:45.644177    4225 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 11:59:45.645338    4225 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 11:59:45.645347    4225 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1066/.minikube/addons for local assets ...
	I0719 11:59:45.645440    4225 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19307-1066/.minikube/files for local assets ...
	I0719 11:59:45.645560    4225 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0719 11:59:45.645694    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 11:59:45.648282    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0719 11:59:45.655118    4225 start.go:296] duration metric: took 40.053375ms for postStartSetup
	I0719 11:59:45.655131    4225 fix.go:56] duration metric: took 20.746864s for fixHost
	I0719 11:59:45.655161    4225 main.go:141] libmachine: Using SSH client type: native
	I0719 11:59:45.655266    4225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1006f2a10] 0x1006f5270 <nil>  [] 0s} localhost 50502 <nil> <nil>}
	I0719 11:59:45.655270    4225 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 11:59:45.711912    4225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721415585.978453962
	
	I0719 11:59:45.711926    4225 fix.go:216] guest clock: 1721415585.978453962
	I0719 11:59:45.711930    4225 fix.go:229] Guest: 2024-07-19 11:59:45.978453962 -0700 PDT Remote: 2024-07-19 11:59:45.655133 -0700 PDT m=+20.861452792 (delta=323.320962ms)
	I0719 11:59:45.711950    4225 fix.go:200] guest clock delta is within tolerance: 323.320962ms
	I0719 11:59:45.711952    4225 start.go:83] releasing machines lock for "stopped-upgrade-275000", held for 20.803694417s
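
fix.go compares the guest's `date +%s.%N` against host time and only resynchronizes when the delta exceeds a tolerance; here 323 ms is within bounds, so nothing is done. A standalone sketch of the same check (the 1 s threshold is an illustrative assumption, not minikube's constant):

	guest=$(ssh -p 50502 docker@127.0.0.1 date +%s.%N)
	host=$(python3 -c 'import time; print(f"{time.time():.9f}")')   # BSD date on macOS lacks %N
	awk -v g="$guest" -v h="$host" 'BEGIN {
	  d = g - h; if (d < 0) d = -d
	  exit !(d < 1.0)   # tolerate up to 1 s of skew
	}' && echo "guest clock within tolerance"
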
	I0719 11:59:45.712025    4225 ssh_runner.go:195] Run: cat /version.json
	I0719 11:59:45.712037    4225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 11:59:45.712035    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 11:59:45.712056    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	W0719 11:59:45.842045    4225 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0719 11:59:45.842104    4225 ssh_runner.go:195] Run: systemctl --version
	I0719 11:59:45.844224    4225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 11:59:45.845863    4225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 11:59:45.845890    4225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 11:59:45.848666    4225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 11:59:45.854032    4225 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 11:59:45.854041    4225 start.go:495] detecting cgroup driver to use...
	I0719 11:59:45.854115    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 11:59:45.865642    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0719 11:59:45.868918    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 11:59:45.871748    4225 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 11:59:45.871778    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 11:59:45.874747    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 11:59:45.877730    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 11:59:45.880648    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 11:59:45.884017    4225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 11:59:45.886815    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 11:59:45.889730    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 11:59:45.894234    4225 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 11:59:45.897753    4225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 11:59:45.901034    4225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 11:59:45.904124    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:45.967363    4225 ssh_runner.go:195] Run: sudo systemctl restart containerd
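
The sed edits above force containerd onto the cgroupfs driver and the runc v2 runtime before the restart. The relevant portion of the resulting /etc/containerd/config.toml would look roughly like this (shape inferred from the sed expressions, not read back from the VM):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.7"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
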
	I0719 11:59:45.973805    4225 start.go:495] detecting cgroup driver to use...
	I0719 11:59:45.973892    4225 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 11:59:45.980306    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 11:59:45.984899    4225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 11:59:45.994405    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 11:59:45.999600    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 11:59:46.004164    4225 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 11:59:46.057713    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 11:59:46.063046    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 11:59:46.069140    4225 ssh_runner.go:195] Run: which cri-dockerd
	I0719 11:59:46.070386    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 11:59:46.073547    4225 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 11:59:46.078500    4225 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 11:59:46.143276    4225 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 11:59:46.206699    4225 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 11:59:46.206757    4225 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
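
docker.go:574 writes a small daemon.json (130 bytes) to pin Docker's cgroup driver. Its content is not echoed in the log; a plausible shape, given the "configuring docker to use cgroupfs" message, is:

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
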
	I0719 11:59:46.211964    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:46.277544    4225 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 11:59:47.431168    4225 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15362275s)
	I0719 11:59:47.431224    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 11:59:47.436342    4225 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 11:59:47.442643    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 11:59:47.447570    4225 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 11:59:47.510545    4225 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 11:59:47.570483    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:47.636459    4225 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 11:59:47.641820    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 11:59:47.646413    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:47.710749    4225 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 11:59:47.751005    4225 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 11:59:47.751084    4225 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 11:59:47.753603    4225 start.go:563] Will wait 60s for crictl version
	I0719 11:59:47.753656    4225 ssh_runner.go:195] Run: which crictl
	I0719 11:59:47.755042    4225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 11:59:47.769117    4225 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0719 11:59:47.769182    4225 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 11:59:47.788802    4225 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 11:59:47.806168    4225 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0719 11:59:47.806399    4225 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0719 11:59:47.807716    4225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 11:59:47.811267    4225 kubeadm.go:883] updating cluster {Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0719 11:59:47.811319    4225 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 11:59:47.811371    4225 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 11:59:47.822279    4225 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 11:59:47.822288    4225 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 11:59:47.822334    4225 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 11:59:47.825514    4225 ssh_runner.go:195] Run: which lz4
	I0719 11:59:47.826767    4225 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 11:59:47.828014    4225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 11:59:47.828024    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0719 11:59:48.771142    4225 docker.go:649] duration metric: took 944.418042ms to copy over tarball
	I0719 11:59:48.771199    4225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 11:59:49.916025    4225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.144829583s)
	I0719 11:59:49.916041    4225 ssh_runner.go:146] rm: /preloaded.tar.lz4
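
The preload path is: scp the lz4 tarball into the VM (≈360 MB in under a second over the loopback hostfwd), untar it over /var so /var/lib/docker arrives pre-populated, delete the tarball, then restart Docker to pick up the image store. Done by hand, using the same tar invocation the log shows:

	scp -P 50502 preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 docker@127.0.0.1:/preloaded.tar.lz4
	ssh -p 50502 docker@127.0.0.1 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && rm /preloaded.tar.lz4'
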
	I0719 11:59:49.931358    4225 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 11:59:49.934405    4225 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0719 11:59:49.939486    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:50.004249    4225 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 11:59:51.467680    4225 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.463435084s)
	I0719 11:59:51.467777    4225 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 11:59:51.480167    4225 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 11:59:51.480176    4225 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 11:59:51.480182    4225 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 11:59:51.485596    4225 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:51.487244    4225 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.489048    4225 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.489111    4225 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:51.490729    4225 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.493027    4225 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.493030    4225 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.493140    4225 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.495124    4225 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.495283    4225 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.496582    4225 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 11:59:51.496763    4225 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.497746    4225 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:51.498257    4225 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.498877    4225 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 11:59:51.499481    4225 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:51.908012    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.918404    4225 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0719 11:59:51.918432    4225 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.918478    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0719 11:59:51.926897    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.929363    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0719 11:59:51.935404    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.945306    4225 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0719 11:59:51.945329    4225 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.945384    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0719 11:59:51.949847    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.951110    4225 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0719 11:59:51.951135    4225 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.951166    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0719 11:59:51.956682    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0719 11:59:51.961344    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.962125    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 11:59:51.968116    4225 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0719 11:59:51.968138    4225 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.968194    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0719 11:59:51.969655    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 11:59:51.969760    4225 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 11:59:51.979664    4225 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0719 11:59:51.979689    4225 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.979752    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 11:59:51.984205    4225 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0719 11:59:51.984224    4225 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0719 11:59:51.984274    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0719 11:59:51.988154    4225 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 11:59:51.988279    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:51.996887    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0719 11:59:51.996912    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0719 11:59:51.996930    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0719 11:59:51.996976    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0719 11:59:52.025028    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 11:59:52.025045    4225 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0719 11:59:52.025066    4225 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:52.025108    4225 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 11:59:52.025132    4225 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0719 11:59:52.068178    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0719 11:59:52.068218    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 11:59:52.068221    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0719 11:59:52.068321    4225 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0719 11:59:52.077887    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0719 11:59:52.077904    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0719 11:59:52.081778    4225 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 11:59:52.081887    4225 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:52.092616    4225 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 11:59:52.092629    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0719 11:59:52.129652    4225 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 11:59:52.129676    4225 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:52.129730    4225 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 11:59:52.180624    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0719 11:59:52.180648    4225 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 11:59:52.180654    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0719 11:59:52.203584    4225 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 11:59:52.203713    4225 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 11:59:52.297054    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 11:59:52.297070    4225 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0719 11:59:52.297100    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0719 11:59:52.363290    4225 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 11:59:52.363320    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0719 11:59:52.487664    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 11:59:52.487690    4225 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 11:59:52.487697    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0719 11:59:52.723640    4225 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 11:59:52.723691    4225 cache_images.go:92] duration metric: took 1.243516667s to LoadCachedImages
	W0719 11:59:52.723736    4225 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
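
The two X lines above are the recorded failure for this step: the preloaded tarball ships k8s.gcr.io-tagged images while this minikube expects registry.k8s.io names for v1.24.1, so every image must instead come from the host-side file cache, and the cache file for kube-scheduler is absent. A quick way to confirm which cached image files actually exist on the host (a diagnostic suggestion, not something the test runs):

	ls -l /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/
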
	I0719 11:59:52.723742    4225 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0719 11:59:52.723792    4225 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-275000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 11:59:52.723858    4225 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 11:59:52.737901    4225 cni.go:84] Creating CNI manager for ""
	I0719 11:59:52.737916    4225 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:59:52.737921    4225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 11:59:52.737930    4225 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-275000 NodeName:stopped-upgrade-275000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 11:59:52.737988    4225 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-275000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 11:59:52.738047    4225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0719 11:59:52.741560    4225 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 11:59:52.741591    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 11:59:52.744517    4225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0719 11:59:52.749490    4225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 11:59:52.754551    4225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
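
Before kubeadm consumes it, the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new could be sanity-checked inside the VM with a dry run (illustrative; the test does not do this):

	ssh -p 50502 docker@127.0.0.1 \
	  'sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run'
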
	I0719 11:59:52.761297    4225 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0719 11:59:52.762561    4225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 11:59:52.766145    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 11:59:52.835575    4225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 11:59:52.845900    4225 certs.go:68] Setting up /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000 for IP: 10.0.2.15
	I0719 11:59:52.845911    4225 certs.go:194] generating shared ca certs ...
	I0719 11:59:52.845920    4225 certs.go:226] acquiring lock for ca certs: {Name:mk315b805d576c08b7c87d345baabbe459ef4715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:52.846098    4225 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.key
	I0719 11:59:52.846151    4225 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.key
	I0719 11:59:52.846156    4225 certs.go:256] generating profile certs ...
	I0719 11:59:52.846217    4225 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.key
	I0719 11:59:52.846238    4225 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6
	I0719 11:59:52.846250    4225 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0719 11:59:52.970195    4225 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6 ...
	I0719 11:59:52.970209    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6: {Name:mk8106679c8ec9d10f63c1edbf0c3509686f0e48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:52.970551    4225 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6 ...
	I0719 11:59:52.970557    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6: {Name:mk601f4ec21661ecc272a2663420b49625baa029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:52.970707    4225 certs.go:381] copying /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt.ef0f4bd6 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt
	I0719 11:59:52.971377    4225 certs.go:385] copying /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key.ef0f4bd6 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key
	I0719 11:59:52.971542    4225 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/proxy-client.key
	I0719 11:59:52.971686    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565.pem (1338 bytes)
	W0719 11:59:52.971718    4225 certs.go:480] ignoring /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0719 11:59:52.971724    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 11:59:52.971744    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem (1082 bytes)
	I0719 11:59:52.971763    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem (1123 bytes)
	I0719 11:59:52.971781    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/key.pem (1679 bytes)
	I0719 11:59:52.971820    4225 certs.go:484] found cert: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0719 11:59:52.972127    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 11:59:52.979000    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 11:59:52.986022    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 11:59:52.993410    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0719 11:59:53.000867    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 11:59:53.008114    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 11:59:53.014613    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 11:59:53.021982    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 11:59:53.029885    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0719 11:59:53.037331    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0719 11:59:53.044419    4225 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 11:59:53.050948    4225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 11:59:53.056102    4225 ssh_runner.go:195] Run: openssl version
	I0719 11:59:53.057958    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0719 11:59:53.061032    4225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0719 11:59:53.062421    4225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 18:20 /usr/share/ca-certificates/15652.pem
	I0719 11:59:53.062442    4225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0719 11:59:53.064193    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 11:59:53.066939    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 11:59:53.070178    4225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:59:53.071747    4225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 18:13 /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:59:53.071763    4225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 11:59:53.073548    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 11:59:53.076740    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0719 11:59:53.079554    4225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0719 11:59:53.080884    4225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 18:20 /usr/share/ca-certificates/1565.pem
	I0719 11:59:53.080898    4225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0719 11:59:53.082753    4225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
	I0719 11:59:53.086035    4225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 11:59:53.087571    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 11:59:53.089455    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 11:59:53.091501    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 11:59:53.093419    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 11:59:53.095394    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 11:59:53.097157    4225 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
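The six `-checkend 86400` probes above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means the certificate expires within that window and would need regeneration. An equivalent standalone check (path taken from the log):

	# Exit 0 if the cert is still valid 24h from now, 1 if it will have expired
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/peer.crt \
	  && echo "certificate ok" || echo "certificate expires within 24h"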
	I0719 11:59:53.099022    4225 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-275000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-275000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 11:59:53.099091    4225 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 11:59:53.110355    4225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 11:59:53.113792    4225 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 11:59:53.113798    4225 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 11:59:53.113821    4225 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 11:59:53.116729    4225 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 11:59:53.117024    4225 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-275000" does not appear in /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:59:53.117123    4225 kubeconfig.go:62] /Users/jenkins/minikube-integration/19307-1066/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-275000" cluster setting kubeconfig missing "stopped-upgrade-275000" context setting]
	I0719 11:59:53.117393    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:59:53.117810    4225 kapi.go:59] client config for stopped-upgrade-275000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a87790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 11:59:53.118122    4225 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 11:59:53.120903    4225 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-275000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
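The reconfigure decision above hinges on the exit status of `diff -u`: 0 means the freshly rendered kubeadm.yaml.new matches the file already on the node, 1 means drift (here the criSocket gained a unix:// URI scheme and the kubelet cgroupDriver changed), so minikube copies the new file into place and reruns the init phases. A sketch of that gate:

	# diff exits 0 when the files are identical, 1 when the configs differ
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "kubeadm config unchanged"
	else
	  echo "config drift detected; reconfiguring from kubeadm.yaml.new"
	fi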
	I0719 11:59:53.120908    4225 kubeadm.go:1160] stopping kube-system containers ...
	I0719 11:59:53.120942    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 11:59:53.131835    4225 docker.go:483] Stopping containers: [88b7f06c953c 02a941fd8e55 f46177018be0 11f4036961d9 8db569ae2b3e 3e008b48c13a 0f3bce8296ce 79c60209a5a1]
	I0719 11:59:53.131903    4225 ssh_runner.go:195] Run: docker stop 88b7f06c953c 02a941fd8e55 f46177018be0 11f4036961d9 8db569ae2b3e 3e008b48c13a 0f3bce8296ce 79c60209a5a1
	I0719 11:59:53.142599    4225 ssh_runner.go:195] Run: sudo systemctl stop kubelet
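The two steps above rely on the dockershim/cri-dockerd naming convention, under which kubelet-managed containers are named `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`; filtering on `name=k8s_.*_(kube-system)_` therefore selects the control-plane containers, which are stopped along with kubelet before reconfiguring. Roughly, in the order the log shows:

	# Stop every kube-system container, then the kubelet that would restart them
	docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
	  | xargs -r docker stop
	sudo systemctl stop kubelet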
	I0719 11:59:53.148533    4225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 11:59:53.151188    4225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 11:59:53.151193    4225 kubeadm.go:157] found existing configuration files:
	
	I0719 11:59:53.151216    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0719 11:59:53.153892    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 11:59:53.153914    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 11:59:53.156865    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0719 11:59:53.159268    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 11:59:53.159288    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 11:59:53.162082    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0719 11:59:53.164966    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 11:59:53.164987    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 11:59:53.167669    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0719 11:59:53.170197    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 11:59:53.170220    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
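The four grep/rm pairs above amount to one loop: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise (including when the file is simply missing, as here) it is removed so the kubeconfig phase below can regenerate it. Condensed, with the endpoint taken from the log:

	endpoint="https://control-plane.minikube.internal:50538"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Drop any kubeconfig that does not point at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done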
	I0719 11:59:53.173131    4225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 11:59:53.175736    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.197134    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.514199    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.624513    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 11:59:53.652882    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
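The restart then replays the relevant `kubeadm init` phases individually rather than running a full `kubeadm init`: certificates, kubeconfig files, kubelet bootstrap, the three control-plane static pods, and local etcd, all against the same rendered config. Stripped of the PATH plumbing, the sequence is:

	cfg=/var/tmp/minikube/kubeadm.yaml
	# Each entry is intentionally word-split into a phase plus subphase
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo kubeadm init phase $phase --config "$cfg"
	done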
	I0719 11:59:53.675376    4225 api_server.go:52] waiting for apiserver process to appear ...
	I0719 11:59:53.675466    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:59:54.177221    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:59:54.676607    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 11:59:54.680962    4225 api_server.go:72] duration metric: took 1.005603958s to wait for apiserver process to appear ...
	I0719 11:59:54.680970    4225 api_server.go:88] waiting for apiserver healthz status ...
	I0719 11:59:54.680979    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 11:59:59.683149    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 11:59:59.683234    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:04.684253    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:04.684328    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:09.685416    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:09.685543    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:14.686875    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:14.686897    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:19.688155    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:19.688240    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:24.690718    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:24.690759    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:29.693087    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:29.693158    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:34.695605    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:34.695652    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:39.695965    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:39.696031    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:44.698412    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:44.698454    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:49.700606    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:00:49.700622    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:00:54.702689    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
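Each healthz probe above is an HTTPS GET against the apiserver with a roughly five-second client timeout, retried until an overall deadline; every attempt here times out, which is why minikube falls back to gathering component logs below. An equivalent manual probe (self-signed cert, hence -k):

	# Prints "ok" from a healthy apiserver; gives up after 5s otherwise
	curl -sk --max-time 5 https://10.0.2.15:8443/healthz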
	I0719 12:00:54.702822    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:00:54.720188    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:00:54.720288    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:00:54.730940    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:00:54.731003    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:00:54.741578    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:00:54.741649    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:00:54.752453    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:00:54.752523    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:00:54.767080    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:00:54.767141    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:00:54.777687    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:00:54.777753    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:00:54.788001    4225 logs.go:276] 0 containers: []
	W0719 12:00:54.788012    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:00:54.788062    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:00:54.798540    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:00:54.798555    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:00:54.798560    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:00:54.824626    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:00:54.824635    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:00:54.836431    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:00:54.836442    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:00:54.883849    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:00:54.883868    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:00:54.909361    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:00:54.909375    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:00:54.928766    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:00:54.928786    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:00:54.945413    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:00:54.945431    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:00:54.956880    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:00:54.956895    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:00:54.961399    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:00:54.961406    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:00:55.065338    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:00:55.065353    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:00:55.079569    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:00:55.079583    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:00:55.098111    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:00:55.098121    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:00:55.139025    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:00:55.139033    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:00:55.153605    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:00:55.153621    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:00:55.165071    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:00:55.165086    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:00:55.176324    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:00:55.176338    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:00:55.192083    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:00:55.192097    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
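The log-gathering pass above, repeated on each subsequent failed poll below, is mechanical: for each control-plane component it lists matching container IDs (two per component where both the pre- and post-restart instances exist), then tails the last 400 lines of each, alongside the kubelet and docker journals, dmesg, and `kubectl describe nodes`. Per component it amounts to:

	# Tail recent logs from every container of one component, old and new instances alike
	for id in $(docker ps -a --filter=name=k8s_etcd --format '{{.ID}}'); do
	  docker logs --tail 400 "$id"
	done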
	I0719 12:00:57.705199    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:02.707403    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:02.707574    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:02.728268    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:02.728356    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:02.742800    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:02.742874    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:02.754592    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:02.754662    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:02.764944    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:02.765011    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:02.776604    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:02.776675    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:02.792384    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:02.792461    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:02.807168    4225 logs.go:276] 0 containers: []
	W0719 12:01:02.807179    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:02.807234    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:02.818049    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:02.818068    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:02.818073    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:02.829979    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:02.829994    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:02.844146    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:02.844156    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:02.855583    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:02.855597    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:02.866749    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:02.866764    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:02.878501    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:02.878512    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:02.890291    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:02.890302    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:02.908793    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:02.908807    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:02.920736    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:02.920748    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:02.932533    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:02.932544    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:02.958523    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:02.958531    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:02.962598    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:02.962605    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:02.980030    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:02.980044    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:02.994641    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:02.994652    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:03.033057    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:03.033072    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:03.069430    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:03.069436    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:03.109734    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:03.109747    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:05.627607    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:10.629848    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:10.630095    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:10.655682    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:10.655803    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:10.672734    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:10.672814    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:10.686877    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:10.686952    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:10.702842    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:10.702913    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:10.715619    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:10.715691    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:10.726878    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:10.726952    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:10.736467    4225 logs.go:276] 0 containers: []
	W0719 12:01:10.736479    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:10.736534    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:10.746650    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:10.746668    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:10.746674    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:10.750799    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:10.750807    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:10.788865    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:10.788877    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:10.803923    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:10.803934    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:10.839922    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:10.839933    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:10.858864    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:10.858875    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:10.875177    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:10.875188    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:10.887118    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:10.887129    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:10.901470    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:10.901480    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:10.915464    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:10.915475    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:10.929419    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:10.929431    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:10.944102    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:10.944112    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:10.955849    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:10.955860    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:10.979959    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:10.979967    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:11.015974    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:11.015981    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:11.027504    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:11.027515    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:11.041291    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:11.041302    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:13.553970    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:18.556122    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:18.556289    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:18.578442    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:18.578507    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:18.589186    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:18.589255    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:18.599446    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:18.599523    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:18.609941    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:18.610020    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:18.620643    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:18.620705    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:18.630994    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:18.631057    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:18.640934    4225 logs.go:276] 0 containers: []
	W0719 12:01:18.640947    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:18.640994    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:18.651301    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:18.651322    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:18.651328    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:18.688448    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:18.688458    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:18.702702    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:18.702713    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:18.715750    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:18.715759    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:18.728109    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:18.728120    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:18.762573    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:18.762584    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:18.800959    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:18.800971    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:18.812472    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:18.812484    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:18.824058    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:18.824070    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:18.828745    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:18.828754    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:18.842810    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:18.842821    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:18.857184    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:18.857196    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:18.868947    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:18.868961    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:18.885097    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:18.885107    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:18.896629    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:18.896639    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:18.916374    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:18.916385    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:18.928341    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:18.928352    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:21.454333    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:26.456583    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:26.456780    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:26.475429    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:26.475502    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:26.487488    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:26.487561    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:26.497549    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:26.497621    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:26.507955    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:26.508028    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:26.519768    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:26.519835    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:26.530419    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:26.530484    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:26.540334    4225 logs.go:276] 0 containers: []
	W0719 12:01:26.540355    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:26.540411    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:26.550904    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:26.550926    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:26.550931    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:26.568528    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:26.568538    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:26.606634    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:26.606646    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:26.621337    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:26.621347    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:26.632327    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:26.632339    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:26.643841    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:26.643851    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:26.659143    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:26.659155    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:26.673868    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:26.673880    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:26.685422    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:26.685433    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:26.697002    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:26.697012    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:26.712481    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:26.712496    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:26.750580    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:26.750589    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:26.754874    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:26.754883    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:26.766163    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:26.766173    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:26.783618    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:26.783629    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:26.808781    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:26.808788    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:26.843697    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:26.843710    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:29.356753    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:34.358884    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:34.359073    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:34.380250    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:34.380358    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:34.394988    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:34.395066    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:34.407575    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:34.407640    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:34.418079    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:34.418148    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:34.429002    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:34.429068    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:34.440014    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:34.440083    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:34.450581    4225 logs.go:276] 0 containers: []
	W0719 12:01:34.450592    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:34.450647    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:34.461323    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:34.461342    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:34.461348    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:34.497979    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:34.497999    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:34.536912    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:34.536924    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:34.551402    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:34.551413    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:34.565637    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:34.565648    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:34.581595    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:34.581605    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:34.592652    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:34.592663    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:34.605662    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:34.605674    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:34.624057    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:34.624067    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:34.664104    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:34.664117    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:34.678439    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:34.678451    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:34.690482    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:34.690494    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:34.702129    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:34.702140    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:34.714192    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:34.714206    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:34.738120    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:34.738129    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:34.742256    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:34.742262    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:34.753100    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:34.753112    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:37.277428    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:42.279641    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:42.280061    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:42.316668    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:42.316808    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:42.336707    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:42.336802    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:42.350720    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:42.350798    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:42.362938    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:42.363010    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:42.374629    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:42.374691    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:42.387117    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:42.387182    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:42.398225    4225 logs.go:276] 0 containers: []
	W0719 12:01:42.398237    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:42.398288    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:42.409333    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:42.409353    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:42.409359    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:42.422432    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:42.422445    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:42.434623    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:42.434637    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:42.446317    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:42.446329    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:42.458424    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:42.458436    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:42.469918    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:42.469928    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:42.508227    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:42.508242    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:42.513954    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:42.513964    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:42.549358    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:42.549372    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:42.571839    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:42.571854    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:42.585725    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:42.585739    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:42.623111    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:42.623123    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:42.648267    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:42.648276    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:42.661669    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:42.661683    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:42.675769    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:42.675796    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:42.695667    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:42.695679    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:42.709565    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:42.709580    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:45.229274    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:50.231414    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:50.231615    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:50.249123    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:50.249211    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:50.262213    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:50.262287    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:50.273990    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:50.274050    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:50.284773    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:50.284834    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:50.295244    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:50.295312    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:50.306415    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:50.306482    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:50.320159    4225 logs.go:276] 0 containers: []
	W0719 12:01:50.320173    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:50.320236    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:50.331144    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:50.331161    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:50.331167    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:50.342589    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:50.342598    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:50.354076    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:50.354089    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:50.365642    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:50.365654    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:50.402215    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:50.402231    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:50.416575    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:50.416586    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:50.456000    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:50.456012    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:50.473503    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:50.473513    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:50.490900    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:50.490911    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:01:50.502295    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:50.502306    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:50.515215    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:50.515227    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:50.519435    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:50.519441    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:50.533625    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:50.533636    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:50.547898    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:50.547908    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:50.559873    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:50.559886    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:50.601180    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:50.601192    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:50.612936    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:50.612947    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
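
The block above is one iteration of a pattern that repeats for the rest of this log: minikube probes the apiserver's /healthz endpoint, the five-second client timeout expires ("context deadline exceeded"), and the tooling falls back to sweeping logs from every control-plane container before trying again. A rough shell equivalent of the probe loop, offered as a sketch only: the real probe is Go HTTP-client code behind api_server.go:253, and the curl flags here are an assumption inferred from the timestamps.

    # hypothetical stand-in for the health probe; -k skips cert verification,
    # --max-time 5 mirrors the ~5 s between "Checking" and "stopped:" above
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do
        echo "stopped: no healthz answer within 5s"   # cf. api_server.go:269
        sleep 2.5                                     # pause observed between cycles
    done
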
	I0719 12:01:53.137305    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:01:58.139161    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:01:58.139376    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:01:58.157394    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:01:58.157482    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:01:58.172744    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:01:58.172815    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:01:58.184403    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:01:58.184474    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:01:58.194655    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:01:58.194721    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:01:58.204824    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:01:58.204892    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:01:58.226522    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:01:58.226597    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:01:58.237500    4225 logs.go:276] 0 containers: []
	W0719 12:01:58.237512    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:01:58.237572    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:01:58.248231    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:01:58.248251    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:01:58.248257    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:01:58.261736    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:01:58.261746    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:01:58.273104    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:01:58.273115    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:01:58.311728    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:01:58.311747    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:01:58.316147    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:01:58.316154    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:01:58.329670    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:01:58.329681    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:01:58.344185    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:01:58.344202    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:01:58.356553    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:01:58.356569    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:01:58.374757    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:01:58.374769    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:01:58.399614    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:01:58.399622    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:01:58.411499    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:01:58.411511    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:01:58.447867    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:01:58.447880    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:01:58.469847    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:01:58.469858    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:01:58.483893    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:01:58.483902    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:01:58.526254    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:01:58.526267    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:01:58.537527    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:01:58.537538    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:01:58.549113    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:01:58.549124    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
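
Each sweep begins by resolving container IDs per component. With the Docker runtime, the kubelet names containers k8s_<container>_<pod>_<namespace>_..., so a name filter on k8s_<component> matches both the running instance and any exited predecessor, which is why most components report two IDs above. The kindnet query coming back empty is expected on a cluster that does not use the kindnet CNI, hence the warning rather than an error. A compact sketch of the enumeration step (the component list is copied from the log; the loop itself is illustrative):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | tr '\n' ' ')
        echo "${c}: $(wc -w <<<"${ids}") containers: [${ids% }]"   # cf. logs.go:276
    done
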
	I0719 12:02:01.062769    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:06.065031    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:06.065394    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:06.095385    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:06.095518    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:06.115073    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:06.115168    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:06.133751    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:06.133825    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:06.145914    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:06.145976    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:06.156891    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:06.156954    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:06.167754    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:06.167823    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:06.178215    4225 logs.go:276] 0 containers: []
	W0719 12:02:06.178226    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:06.178281    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:06.189081    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:06.189103    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:06.189108    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:06.227135    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:06.227145    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:06.241022    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:06.241034    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:06.260352    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:06.260363    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:06.272885    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:06.272896    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:06.287275    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:06.287286    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:06.299487    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:06.299504    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:06.314298    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:06.314311    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:06.350370    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:06.350379    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:06.354447    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:06.354456    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:06.390950    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:06.390961    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:06.404785    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:06.404800    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:06.417333    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:06.417345    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:06.432760    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:06.432773    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:06.448783    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:06.448795    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:06.461392    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:06.461405    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:06.486110    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:06.486128    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:09.001710    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:14.004020    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:14.004330    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:14.035031    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:14.035153    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:14.054384    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:14.054480    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:14.068474    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:14.068549    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:14.081210    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:14.081283    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:14.094920    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:14.094992    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:14.105500    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:14.105576    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:14.115483    4225 logs.go:276] 0 containers: []
	W0719 12:02:14.115492    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:14.115540    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:14.125995    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:14.126014    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:14.126020    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:14.164596    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:14.164610    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:14.206437    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:14.206450    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:14.218241    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:14.218255    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:14.239500    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:14.239513    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:14.253014    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:14.253027    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:14.264897    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:14.264912    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:14.280012    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:14.280023    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:14.304951    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:14.304963    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:14.309020    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:14.309026    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:14.343260    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:14.343274    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:14.359899    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:14.359910    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:14.374233    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:14.374245    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:14.385553    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:14.385565    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:14.399220    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:14.399232    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:14.413589    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:14.413602    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:14.424485    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:14.424497    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
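
For every ID found, the gatherer tails the last 400 lines of that container's output; host-level services are read from journald instead, the kubelet unit on its own and the docker and cri-docker units together (passing -u twice makes journalctl interleave both units chronologically). The order of the "Gathering logs for ..." steps shuffles from cycle to cycle, consistent with the sources being held in a Go map, whose iteration order is randomized (an inference, not something the log states). The commands themselves are plain shell, executed inside the VM via ssh_runner.go:

    docker logs --tail 400 7e7f3cd7da22              # one capture per container ID
    sudo journalctl -u kubelet -n 400                # kubelet runs as a systemd unit
    sudo journalctl -u docker -u cri-docker -n 400   # engine and CRI shim, interleaved
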
	I0719 12:02:16.936591    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:21.938803    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:21.938965    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:21.951784    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:21.951842    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:21.963205    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:21.963276    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:21.978457    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:21.978527    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:21.988950    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:21.989025    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:22.000751    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:22.000817    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:22.016991    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:22.017076    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:22.027222    4225 logs.go:276] 0 containers: []
	W0719 12:02:22.027233    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:22.027287    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:22.038427    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:22.038447    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:22.038454    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:22.049859    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:22.049872    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:22.061891    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:22.061908    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:22.096542    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:22.096553    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:22.113731    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:22.113764    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:22.128218    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:22.128230    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:22.140054    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:22.140064    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:22.152200    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:22.152211    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:22.189971    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:22.189979    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:22.202403    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:22.202416    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:22.220429    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:22.220444    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:22.244751    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:22.244759    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:22.258050    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:22.258061    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:22.269405    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:22.269417    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:22.280802    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:22.280814    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:22.285310    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:22.285317    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:22.323135    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:22.323150    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:24.839881    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:29.842420    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:29.842785    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:29.883433    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:29.883581    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:29.906139    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:29.906248    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:29.920941    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:29.921016    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:29.933548    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:29.933621    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:29.944463    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:29.944534    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:29.955789    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:29.955856    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:29.971501    4225 logs.go:276] 0 containers: []
	W0719 12:02:29.971517    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:29.971575    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:29.982146    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:29.982165    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:29.982170    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:29.999409    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:29.999419    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:30.016914    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:30.016926    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:30.041742    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:30.041752    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:30.081610    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:30.081625    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:30.100864    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:30.100873    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:30.111961    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:30.111975    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:30.123927    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:30.123937    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:30.138092    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:30.138102    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:30.155683    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:30.155695    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:30.167776    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:30.167790    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:30.182337    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:30.182348    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:30.225020    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:30.225032    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:30.237057    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:30.237068    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:30.248446    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:30.248459    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:30.252912    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:30.252920    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:30.289051    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:30.289065    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
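
The "container status" step is written defensively. The backtick substitution `which crictl || echo crictl` expands to the crictl path when the binary is installed, and to the bare word crictl otherwise; in the latter case the sudo invocation fails and the trailing || sudo docker ps -a takes over. A simplified rewrite of the same idea (the original one-liner also falls back to docker if crictl is present but errors):

    if command -v crictl >/dev/null 2>&1; then
        sudo "$(command -v crictl)" ps -a    # CRI view when crictl is available
    else
        sudo docker ps -a                    # otherwise ask the Docker engine
    fi
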
	I0719 12:02:32.803083    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:37.805771    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:37.806221    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:37.846664    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:37.846833    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:37.872808    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:37.872898    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:37.887259    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:37.887334    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:37.899419    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:37.899486    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:37.910089    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:37.910152    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:37.920840    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:37.920912    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:37.931119    4225 logs.go:276] 0 containers: []
	W0719 12:02:37.931129    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:37.931190    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:37.941556    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:37.941574    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:37.941580    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:37.953623    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:37.953636    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:37.968379    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:37.968395    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:37.980463    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:37.980476    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:37.998348    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:37.998362    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:38.032348    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:38.032360    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:38.076139    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:38.076156    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:38.092633    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:38.092646    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:38.132056    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:38.132064    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:38.143251    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:38.143265    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:38.154821    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:38.154834    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:38.169444    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:38.169453    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:38.193552    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:38.193561    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:38.207891    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:38.207899    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:38.221897    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:38.221910    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:38.239232    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:38.239245    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:38.251039    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:38.251052    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:40.757401    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:45.759715    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:45.759865    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:45.772794    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:45.772880    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:45.783582    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:45.783650    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:45.794101    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:45.794165    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:45.814421    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:45.814497    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:45.824396    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:45.824465    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:45.834854    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:45.834918    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:45.845066    4225 logs.go:276] 0 containers: []
	W0719 12:02:45.845081    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:45.845142    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:45.856561    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:45.856578    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:45.856583    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:45.868981    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:45.868993    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:45.907111    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:45.907118    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:45.942282    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:45.942294    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:45.956611    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:45.956621    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:45.968781    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:45.968793    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:45.980767    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:45.980777    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:45.998509    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:45.998520    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:46.022509    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:46.022516    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:46.038152    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:46.038163    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:02:46.052524    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:46.052536    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:46.065147    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:46.065158    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:46.076632    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:46.076641    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:46.081186    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:46.081192    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:46.101281    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:46.101291    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:46.118267    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:46.118278    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:46.155042    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:46.155051    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:48.668741    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:02:53.669466    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:02:53.669625    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:02:53.683694    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:02:53.683771    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:02:53.695587    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:02:53.695656    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:02:53.706128    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:02:53.706189    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:02:53.716846    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:02:53.716915    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:02:53.728668    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:02:53.728732    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:02:53.739180    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:02:53.739249    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:02:53.749884    4225 logs.go:276] 0 containers: []
	W0719 12:02:53.749899    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:02:53.749952    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:02:53.760262    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:02:53.760279    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:02:53.760285    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:02:53.797158    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:02:53.797167    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:02:53.817932    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:02:53.817944    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:02:53.829301    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:02:53.829311    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:02:53.841559    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:02:53.841570    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:02:53.856573    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:02:53.856584    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:02:53.894631    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:02:53.894642    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:02:53.909066    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:02:53.909076    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:02:53.931459    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:02:53.931467    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:02:53.943637    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:02:53.943648    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:02:53.948535    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:02:53.948546    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:02:53.960756    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:02:53.960767    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:02:53.978588    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:02:53.978599    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:02:53.995620    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:02:53.995631    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:02:54.007169    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:02:54.007179    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:02:54.041952    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:02:54.041962    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:02:54.056316    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:02:54.056329    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
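
Two further sources round out each sweep: the kernel ring buffer, restricted to warning severity and above (-P disables the pager, -H gives human-readable timestamps, -L=never disables color), and the cluster's own view of its nodes, taken with the kubectl binary minikube ships inside the VM, pinned at v1.24.1 (presumably the Kubernetes version under test), against the VM-local kubeconfig:

    # kernel messages, warning level and worse, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # node state via the version-matched kubectl shipped in the VM
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
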
	I0719 12:02:56.573351    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:01.575594    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:01.575813    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:01.594695    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:01.594760    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:01.605413    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:01.605484    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:01.615682    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:01.615754    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:01.626583    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:01.626657    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:01.636911    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:01.636981    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:01.648616    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:01.648687    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:01.659540    4225 logs.go:276] 0 containers: []
	W0719 12:03:01.659551    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:01.659609    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:01.670212    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:01.670229    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:01.670235    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:01.681699    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:01.681711    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:01.693500    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:01.693510    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:01.710836    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:01.710847    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:01.725559    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:01.725569    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:01.749013    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:01.749023    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:01.763649    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:01.763662    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:01.775273    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:01.775289    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:01.787479    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:01.787489    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:01.792246    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:01.792255    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:01.827448    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:01.827459    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:01.841629    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:01.841642    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:01.854000    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:01.854011    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:01.873279    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:01.873289    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:01.911039    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:01.911047    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:01.948462    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:01.948473    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:01.962995    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:01.963008    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:04.477068    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:09.479254    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:09.479494    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:09.501659    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:09.501771    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:09.519587    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:09.519655    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:09.531883    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:09.531949    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:09.542537    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:09.542606    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:09.552964    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:09.553024    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:09.567584    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:09.567647    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:09.581775    4225 logs.go:276] 0 containers: []
	W0719 12:03:09.581786    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:09.581835    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:09.591998    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:09.592018    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:09.592023    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:09.609246    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:09.609256    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:09.621266    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:09.621282    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:09.635424    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:09.635436    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:09.672048    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:09.672056    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:09.708596    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:09.708606    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:09.723434    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:09.723447    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:09.735136    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:09.735148    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:09.753936    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:09.753950    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:09.767152    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:09.767164    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:09.782123    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:09.782137    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:09.794112    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:09.794123    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:09.816486    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:09.816494    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:09.820757    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:09.820766    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:09.857792    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:09.857806    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:09.869765    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:09.869779    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:09.881249    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:09.881259    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:12.395032    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:17.397370    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
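[Note] The cycle above repeats throughout this log: minikube polls the apiserver health endpoint with a roughly 5-second client timeout (api_server.go:253/269) and, after each timeout, re-enumerates the control-plane containers and tails their logs before retrying. A manual probe equivalent to the failing check, run from inside the guest, using the IP and port from the log (illustrative sketch only):

    # Reproduce the health check by hand; -k skips TLS verification,
    # --max-time mirrors the ~5 s client timeout seen in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz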
	I0719 12:03:17.397593    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:17.424291    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:17.424393    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:17.444095    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:17.444162    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:17.458777    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:17.458845    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:17.469764    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:17.469832    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:17.479518    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:17.479588    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:17.490626    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:17.490689    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:17.500701    4225 logs.go:276] 0 containers: []
	W0719 12:03:17.500715    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:17.500766    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:17.512963    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:17.512980    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:17.512988    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:17.527442    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:17.527454    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:17.551247    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:17.551258    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:17.563702    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:17.563714    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:17.578167    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:17.578177    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:17.590423    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:17.590434    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:17.602546    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:17.602557    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:17.616620    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:17.616632    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:17.628395    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:17.628406    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:17.646394    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:17.646404    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:17.685605    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:17.685618    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:17.689820    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:17.689826    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:17.724267    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:17.724279    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:17.737946    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:17.737956    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:17.750269    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:17.750280    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:17.761839    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:17.761853    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:17.799288    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:17.799298    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:20.313228    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:25.315611    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:25.315908    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:25.350683    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:25.350843    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:25.369740    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:25.369851    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:25.388465    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:25.388527    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:25.400473    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:25.400541    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:25.410978    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:25.411043    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:25.422184    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:25.422245    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:25.433716    4225 logs.go:276] 0 containers: []
	W0719 12:03:25.433729    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:25.433786    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:25.449711    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:25.449730    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:25.449735    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:25.464737    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:25.464750    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:25.476284    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:25.476294    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:25.488413    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:25.488425    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:25.525969    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:25.525981    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:25.540083    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:25.540093    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:25.558724    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:25.558734    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:25.582378    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:25.582385    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:25.594092    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:25.594101    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:25.606155    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:25.606171    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:25.644966    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:25.644978    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:25.649431    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:25.649437    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:25.663782    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:25.663793    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:25.703211    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:25.703221    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:25.718761    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:25.718771    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:25.731032    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:25.731043    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:25.742672    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:25.742683    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:28.256234    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:33.258594    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:33.258757    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:33.275130    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:33.275218    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:33.289695    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:33.289770    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:33.299995    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:33.300060    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:33.310543    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:33.310608    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:33.321243    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:33.321309    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:33.332599    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:33.332678    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:33.343374    4225 logs.go:276] 0 containers: []
	W0719 12:03:33.343387    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:33.343439    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:33.354561    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:33.354579    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:33.354584    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:33.366227    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:33.366239    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:33.404375    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:33.404384    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:33.444426    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:33.444442    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:33.458974    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:33.458991    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:33.470439    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:33.470455    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:33.506337    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:33.506352    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:33.520407    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:33.520422    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:33.524685    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:33.524693    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:33.536676    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:33.536692    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:33.551842    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:33.551851    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:33.564219    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:33.564234    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:33.586104    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:33.586112    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:33.598491    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:33.598508    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:33.612985    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:33.613000    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:33.627215    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:33.627227    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:33.638505    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:33.638516    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:36.158392    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:41.160756    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:41.161032    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:41.186563    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:41.186677    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:41.203550    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:41.203633    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:41.217587    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:41.217658    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:41.228513    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:41.228585    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:41.239090    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:41.239149    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:41.249726    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:41.249787    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:41.260012    4225 logs.go:276] 0 containers: []
	W0719 12:03:41.260025    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:41.260084    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:41.270035    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:41.270056    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:41.270062    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:41.307321    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:41.307335    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:41.321661    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:41.321672    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:41.337575    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:41.337586    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:41.349354    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:41.349367    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:41.353454    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:41.353463    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:41.367211    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:41.367223    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:41.378793    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:41.378806    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:41.390471    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:41.390483    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:41.404713    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:41.404723    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:41.416154    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:41.416167    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:41.427993    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:41.428004    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:41.446181    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:41.446192    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:41.468188    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:41.468195    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:41.504924    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:41.504933    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:41.543364    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:41.543375    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:41.555711    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:41.555723    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:44.072591    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:49.074893    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:49.075160    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:03:49.106721    4225 logs.go:276] 2 containers: [7e7f3cd7da22 8db569ae2b3e]
	I0719 12:03:49.106843    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:03:49.125000    4225 logs.go:276] 2 containers: [ae4d64b6ee1a 88b7f06c953c]
	I0719 12:03:49.125095    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:03:49.139821    4225 logs.go:276] 1 containers: [c5af559d093e]
	I0719 12:03:49.139886    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:03:49.151641    4225 logs.go:276] 2 containers: [d2300cc461de 02a941fd8e55]
	I0719 12:03:49.151713    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:03:49.162534    4225 logs.go:276] 1 containers: [7d937d6e712d]
	I0719 12:03:49.162611    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:03:49.172838    4225 logs.go:276] 2 containers: [2f8140a6e07a f46177018be0]
	I0719 12:03:49.172912    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:03:49.184957    4225 logs.go:276] 0 containers: []
	W0719 12:03:49.184967    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:03:49.185022    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:03:49.195681    4225 logs.go:276] 2 containers: [4c08bab5a558 cc71440c5276]
	I0719 12:03:49.195700    4225 logs.go:123] Gathering logs for etcd [ae4d64b6ee1a] ...
	I0719 12:03:49.195705    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae4d64b6ee1a"
	I0719 12:03:49.212306    4225 logs.go:123] Gathering logs for etcd [88b7f06c953c] ...
	I0719 12:03:49.212317    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b7f06c953c"
	I0719 12:03:49.227656    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:03:49.227666    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:03:49.232357    4225 logs.go:123] Gathering logs for kube-apiserver [8db569ae2b3e] ...
	I0719 12:03:49.232365    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db569ae2b3e"
	I0719 12:03:49.270019    4225 logs.go:123] Gathering logs for kube-proxy [7d937d6e712d] ...
	I0719 12:03:49.270029    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d937d6e712d"
	I0719 12:03:49.282373    4225 logs.go:123] Gathering logs for storage-provisioner [cc71440c5276] ...
	I0719 12:03:49.282388    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc71440c5276"
	I0719 12:03:49.293873    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:03:49.293884    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:03:49.315566    4225 logs.go:123] Gathering logs for kube-scheduler [02a941fd8e55] ...
	I0719 12:03:49.315572    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02a941fd8e55"
	I0719 12:03:49.333321    4225 logs.go:123] Gathering logs for kube-controller-manager [f46177018be0] ...
	I0719 12:03:49.333331    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f46177018be0"
	I0719 12:03:49.347130    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:03:49.347141    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:03:49.384903    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:03:49.384912    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:03:49.427678    4225 logs.go:123] Gathering logs for kube-apiserver [7e7f3cd7da22] ...
	I0719 12:03:49.427692    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e7f3cd7da22"
	I0719 12:03:49.442625    4225 logs.go:123] Gathering logs for coredns [c5af559d093e] ...
	I0719 12:03:49.442635    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5af559d093e"
	I0719 12:03:49.458800    4225 logs.go:123] Gathering logs for kube-scheduler [d2300cc461de] ...
	I0719 12:03:49.458811    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2300cc461de"
	I0719 12:03:49.471612    4225 logs.go:123] Gathering logs for kube-controller-manager [2f8140a6e07a] ...
	I0719 12:03:49.471620    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f8140a6e07a"
	I0719 12:03:49.488747    4225 logs.go:123] Gathering logs for storage-provisioner [4c08bab5a558] ...
	I0719 12:03:49.488763    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c08bab5a558"
	I0719 12:03:49.500245    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:03:49.500256    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:03:52.015345    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:03:57.017632    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:03:57.017699    4225 kubeadm.go:597] duration metric: took 4m3.907245916s to restartPrimaryControlPlane
	W0719 12:03:57.017759    4225 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 12:03:57.017788    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0719 12:03:58.033917    4225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.016131167s)
	I0719 12:03:58.033981    4225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 12:03:58.039042    4225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 12:03:58.042286    4225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 12:03:58.045285    4225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 12:03:58.045292    4225 kubeadm.go:157] found existing configuration files:
	
	I0719 12:03:58.045314    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0719 12:03:58.047861    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 12:03:58.047887    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 12:03:58.050955    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0719 12:03:58.054003    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 12:03:58.054039    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 12:03:58.056614    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0719 12:03:58.059168    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 12:03:58.059190    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 12:03:58.062180    4225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0719 12:03:58.065034    4225 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 12:03:58.065060    4225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
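[Note] The grep/rm pairs above are minikube's stale-config cleanup: each component kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. A compressed sketch of that per-file logic (port 50538 is taken from the log; the loop form is an approximation of what the individual Run: lines do):

    # If a kubeconfig does not reference the expected endpoint (or does not
    # exist at all), remove it so `kubeadm init` rewrites it from scratch.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50538" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done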
	I0719 12:03:58.067564    4225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 12:03:58.131818    4225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 12:04:04.950482    4225 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 12:04:04.950512    4225 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 12:04:04.950556    4225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 12:04:04.950610    4225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 12:04:04.950669    4225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 12:04:04.950715    4225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 12:04:04.953992    4225 out.go:204]   - Generating certificates and keys ...
	I0719 12:04:04.954029    4225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 12:04:04.954066    4225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 12:04:04.954104    4225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 12:04:04.954137    4225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 12:04:04.954184    4225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 12:04:04.954217    4225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 12:04:04.954254    4225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 12:04:04.954291    4225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 12:04:04.954334    4225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 12:04:04.954382    4225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 12:04:04.954407    4225 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 12:04:04.954438    4225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 12:04:04.954476    4225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 12:04:04.954506    4225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 12:04:04.954540    4225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 12:04:04.954579    4225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 12:04:04.954635    4225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 12:04:04.954679    4225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 12:04:04.954700    4225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 12:04:04.954745    4225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 12:04:04.965021    4225 out.go:204]   - Booting up control plane ...
	I0719 12:04:04.965058    4225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 12:04:04.965111    4225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 12:04:04.965150    4225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 12:04:04.965192    4225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 12:04:04.965276    4225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 12:04:04.965317    4225 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503086 seconds
	I0719 12:04:04.965391    4225 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 12:04:04.965455    4225 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 12:04:04.965491    4225 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 12:04:04.965596    4225 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-275000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 12:04:04.965627    4225 kubeadm.go:310] [bootstrap-token] Using token: g8q9zb.vvtlr4dftj1by9c6
	I0719 12:04:04.969008    4225 out.go:204]   - Configuring RBAC rules ...
	I0719 12:04:04.969056    4225 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 12:04:04.969108    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 12:04:04.969187    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 12:04:04.969253    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 12:04:04.969319    4225 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 12:04:04.969375    4225 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 12:04:04.969449    4225 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 12:04:04.969472    4225 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 12:04:04.969503    4225 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 12:04:04.969505    4225 kubeadm.go:310] 
	I0719 12:04:04.969548    4225 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 12:04:04.969552    4225 kubeadm.go:310] 
	I0719 12:04:04.969591    4225 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 12:04:04.969593    4225 kubeadm.go:310] 
	I0719 12:04:04.969605    4225 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 12:04:04.969636    4225 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 12:04:04.969669    4225 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 12:04:04.969673    4225 kubeadm.go:310] 
	I0719 12:04:04.969698    4225 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 12:04:04.969702    4225 kubeadm.go:310] 
	I0719 12:04:04.969731    4225 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 12:04:04.969734    4225 kubeadm.go:310] 
	I0719 12:04:04.969773    4225 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 12:04:04.969822    4225 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 12:04:04.969863    4225 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 12:04:04.969868    4225 kubeadm.go:310] 
	I0719 12:04:04.969926    4225 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 12:04:04.969978    4225 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 12:04:04.969981    4225 kubeadm.go:310] 
	I0719 12:04:04.970027    4225 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g8q9zb.vvtlr4dftj1by9c6 \
	I0719 12:04:04.970088    4225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 \
	I0719 12:04:04.970102    4225 kubeadm.go:310] 	--control-plane 
	I0719 12:04:04.970105    4225 kubeadm.go:310] 
	I0719 12:04:04.970156    4225 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 12:04:04.970162    4225 kubeadm.go:310] 
	I0719 12:04:04.970200    4225 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g8q9zb.vvtlr4dftj1by9c6 \
	I0719 12:04:04.970347    4225 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:154bfbfa8204c5f233c55b5b534249e4595df04d661a5e7d0c1a65adbcc691d1 
	I0719 12:04:04.970359    4225 cni.go:84] Creating CNI manager for ""
	I0719 12:04:04.970368    4225 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:04:04.979923    4225 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 12:04:04.983058    4225 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 12:04:04.986262    4225 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
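[Note] The 496-byte conflist written above is minikube's bridge CNI configuration. The log does not show its contents, so the sketch below is a hypothetical reconstruction of a typical bridge conflist, not the actual bytes; all field values, including the pod subnet, are assumptions:

    # Illustrative only: a bridge CNI config of the kind written to this path.
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF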
	I0719 12:04:04.990982    4225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 12:04:04.991020    4225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 12:04:04.991040    4225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-275000 minikube.k8s.io/updated_at=2024_07_19T12_04_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ffd2625ecdd21666acefb1ad4fc0b175f94ab221 minikube.k8s.io/name=stopped-upgrade-275000 minikube.k8s.io/primary=true
	I0719 12:04:05.031369    4225 kubeadm.go:1113] duration metric: took 40.380834ms to wait for elevateKubeSystemPrivileges
	I0719 12:04:05.031384    4225 ops.go:34] apiserver oom_adj: -16
	I0719 12:04:05.031389    4225 kubeadm.go:394] duration metric: took 4m11.93582875s to StartCluster
	I0719 12:04:05.031397    4225 settings.go:142] acquiring lock: {Name:mk67411000c671a58f92dc65eb422ba28279f174 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:04:05.031484    4225 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:04:05.031901    4225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/kubeconfig: {Name:mk4dabaac160a2c10ee03f7aa88bffdd6270bcb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:04:05.032101    4225 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:04:05.032206    4225 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:04:05.032160    4225 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 12:04:05.032229    4225 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-275000"
	I0719 12:04:05.032239    4225 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-275000"
	I0719 12:04:05.032243    4225 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-275000"
	W0719 12:04:05.032246    4225 addons.go:243] addon storage-provisioner should already be in state true
	I0719 12:04:05.032250    4225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-275000"
	I0719 12:04:05.032258    4225 host.go:66] Checking if "stopped-upgrade-275000" exists ...
	I0719 12:04:05.035950    4225 out.go:177] * Verifying Kubernetes components...
	I0719 12:04:05.036581    4225 kapi.go:59] client config for stopped-upgrade-275000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/stopped-upgrade-275000/client.key", CAFile:"/Users/jenkins/minikube-integration/19307-1066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a87790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 12:04:05.040351    4225 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-275000"
	W0719 12:04:05.040356    4225 addons.go:243] addon default-storageclass should already be in state true
	I0719 12:04:05.040362    4225 host.go:66] Checking if "stopped-upgrade-275000" exists ...
	I0719 12:04:05.040873    4225 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 12:04:05.040878    4225 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 12:04:05.040890    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 12:04:05.043984    4225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 12:04:05.048212    4225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 12:04:05.052044    4225 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:04:05.052051    4225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 12:04:05.052057    4225 sshutil.go:53] new ssh client: &{IP:localhost Port:50502 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/stopped-upgrade-275000/id_rsa Username:docker}
	I0719 12:04:05.118775    4225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 12:04:05.124677    4225 api_server.go:52] waiting for apiserver process to appear ...
	I0719 12:04:05.124722    4225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 12:04:05.128620    4225 api_server.go:72] duration metric: took 96.510083ms to wait for apiserver process to appear ...
	I0719 12:04:05.128627    4225 api_server.go:88] waiting for apiserver healthz status ...
	I0719 12:04:05.128634    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
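[Note] Readiness is checked here in two stages: first that a kube-apiserver process exists at all (the pgrep at 12:04:05.124722), then that its healthz endpoint answers. Both stages can be reproduced by hand inside the guest (illustrative sketch, values taken from the log):

    # Stage 1: is the newest kube-apiserver process running?
    # (-x whole-pattern match, -n newest, -f match full command line)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Stage 2: does it answer health checks within the same ~5 s budget?
    curl -k --max-time 5 https://10.0.2.15:8443/healthz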
	I0719 12:04:05.159400    4225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 12:04:05.185279    4225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 12:04:10.130725    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:10.130787    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:15.131006    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:15.131037    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:20.131262    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:20.131291    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:25.131638    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:25.131682    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:30.132209    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:30.132235    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:35.132853    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:35.132887    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 12:04:35.504308    4225 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 12:04:35.508606    4225 out.go:177] * Enabled addons: storage-provisioner
	I0719 12:04:35.517540    4225 addons.go:510] duration metric: took 30.485812541s for enable addons: enabled=[storage-provisioner]
	I0719 12:04:40.134097    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:40.134137    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:45.134337    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:45.134362    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:50.134722    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:50.134796    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:04:55.136118    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:04:55.136143    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:00.137835    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:00.137859    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:05.138348    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:05.138758    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:05.159786    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:05.159867    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:05.176585    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:05.176650    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:05.187648    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:05.187722    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:05.199368    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:05.199437    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:05.210160    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:05.210229    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:05.221554    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:05.221621    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:05.234123    4225 logs.go:276] 0 containers: []
	W0719 12:05:05.234133    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:05.234193    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:05.244832    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:05.244849    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:05.244854    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:05.249419    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:05.249426    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:05.284276    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:05.284290    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:05.298669    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:05.298683    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:05.310988    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:05.310999    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:05.327066    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:05.327079    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:05.339410    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:05.339421    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:05.376860    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:05.376876    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:05.393172    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:05.393183    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:05.404674    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:05.404686    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:05.417705    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:05.417717    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:05.435612    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:05.435622    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:05.447993    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:05.448002    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
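
	The five-second gap between each "Checking apiserver healthz" line and its matching "stopped" line suggests a plain HTTP GET with a short client timeout. Below is a minimal Go sketch of that probe, for reading the log only, not minikube's actual api_server.go code; the InsecureSkipVerify setting is an assumption about the VM's self-signed apiserver certificate.

```go
// Minimal sketch (assumptions noted) of the healthz probe seen above:
// GET https://10.0.2.15:8443/healthz with a ~5s client timeout, where a
// timeout surfaces as the "stopped: ... Client.Timeout exceeded" line.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s "Checking" -> "stopped" gap
		Transport: &http.Transport{
			// Assumption: the in-VM apiserver cert is self-signed.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```
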
	I0719 12:05:07.974532    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:12.977109    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:12.977278    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:12.995929    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:12.996023    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:13.009052    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:13.009129    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:13.020149    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:13.020218    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:13.030994    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:13.031058    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:13.041674    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:13.041738    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:13.052377    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:13.052443    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:13.063049    4225 logs.go:276] 0 containers: []
	W0719 12:05:13.063060    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:13.063117    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:13.073511    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:13.073524    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:13.073530    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:13.084674    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:13.084689    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:13.098837    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:13.098847    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:13.109827    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:13.109841    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:13.125416    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:13.125428    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:13.137331    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:13.137341    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:13.158611    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:13.158622    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:13.169842    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:13.169856    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:13.194693    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:13.194704    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:13.233418    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:13.233435    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:13.237920    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:13.237928    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:13.273271    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:13.273286    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:13.288285    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:13.288295    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
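
	Each diagnostic sweep above has the same shape: run `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` for every control-plane component, then `docker logs --tail 400 <id>` for each hit, with a warning when a filter (here "kindnet") matches nothing. The Go sketch below reproduces that sweep locally as an illustration; in the real run these commands go over SSH into the VM via ssh_runner, not exec.Command on the host.

```go
// Illustrative local re-creation of the per-component log sweep: list
// container IDs matching k8s_<name>, then tail each container's log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Component list taken from the filters visible in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func containerIDs(name string) ([]string, error) {
	// mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		if len(ids) == 0 {
			// matches the W-level "No container was found matching" lines
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```
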
	I0719 12:05:15.801988    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:20.803450    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:20.803704    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:20.829663    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:20.829763    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:20.848257    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:20.848335    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:20.862198    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:20.862263    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:20.873423    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:20.873489    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:20.884018    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:20.884077    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:20.895141    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:20.895207    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:20.910669    4225 logs.go:276] 0 containers: []
	W0719 12:05:20.910682    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:20.910735    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:20.921558    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:20.921571    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:20.921577    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:20.935967    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:20.935977    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:20.947830    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:20.947841    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:20.959883    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:20.959894    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:20.971252    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:20.971288    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:20.983211    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:20.983222    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:21.019177    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:21.019188    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:21.024161    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:21.024168    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:21.042869    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:21.042884    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:21.054646    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:21.054655    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:21.070240    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:21.070250    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:21.091840    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:21.091851    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:21.117563    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:21.117574    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:23.658548    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:28.660787    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:28.660936    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:28.672624    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:28.672700    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:28.683561    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:28.683631    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:28.694045    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:28.694109    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:28.704223    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:28.704286    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:28.715747    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:28.715810    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:28.726231    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:28.726297    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:28.736184    4225 logs.go:276] 0 containers: []
	W0719 12:05:28.736196    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:28.736248    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:28.746646    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:28.746663    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:28.746669    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:28.760609    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:28.760620    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:28.774872    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:28.774884    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:28.786320    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:28.786330    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:28.797957    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:28.797967    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:28.815542    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:28.815551    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:28.827239    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:28.827252    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:28.866879    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:28.866898    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:28.871585    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:28.871593    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:28.882592    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:28.882602    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:28.898507    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:28.898517    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:28.922476    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:28.922485    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:28.961228    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:28.961239    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:31.477037    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:36.479274    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:36.479386    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:36.490786    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:36.490863    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:36.501874    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:36.501942    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:36.512585    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:36.512650    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:36.523248    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:36.523317    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:36.533971    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:36.534046    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:36.544909    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:36.544970    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:36.556192    4225 logs.go:276] 0 containers: []
	W0719 12:05:36.556203    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:36.556257    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:36.567598    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:36.567611    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:36.567617    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:36.581377    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:36.581388    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:36.601550    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:36.601560    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:36.613711    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:36.613723    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:36.633617    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:36.633627    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:36.645725    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:36.645736    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:36.670648    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:36.670658    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:36.708263    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:36.708275    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:36.713014    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:36.713022    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:36.725226    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:36.725238    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:36.737162    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:36.737173    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:36.748937    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:36.748950    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:36.783093    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:36.783107    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:39.299472    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:44.301803    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:44.302182    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:44.331879    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:44.332008    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:44.349822    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:44.349904    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:44.363821    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:44.363921    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:44.375144    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:44.375216    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:44.386600    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:44.386664    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:44.402359    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:44.402427    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:44.412592    4225 logs.go:276] 0 containers: []
	W0719 12:05:44.412604    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:44.412665    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:44.423539    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:44.423554    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:44.423559    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:44.434992    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:44.435001    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:44.452158    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:44.452169    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:44.463894    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:44.463906    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:44.475827    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:44.475839    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:44.487654    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:44.487666    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:44.523696    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:44.523704    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:44.528291    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:44.528301    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:44.539996    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:44.540007    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:44.557944    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:44.557954    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:44.581293    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:44.581302    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:44.616060    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:44.616072    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:44.633734    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:44.633743    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:47.149504    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:52.151729    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:52.151918    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:05:52.169064    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:05:52.169149    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:05:52.181699    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:05:52.181766    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:05:52.192870    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:05:52.192933    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:05:52.203472    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:05:52.203539    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:05:52.222085    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:05:52.222156    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:05:52.232596    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:05:52.232665    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:05:52.243222    4225 logs.go:276] 0 containers: []
	W0719 12:05:52.243232    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:05:52.243287    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:05:52.254975    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:05:52.254991    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:05:52.254997    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:05:52.292096    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:05:52.292105    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:05:52.307230    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:05:52.307240    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:05:52.324609    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:05:52.324619    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:05:52.348550    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:05:52.348557    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:05:52.359924    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:05:52.359934    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:05:52.364733    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:05:52.364739    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:05:52.400458    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:05:52.400472    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:05:52.414916    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:05:52.414928    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:05:52.430264    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:05:52.430274    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:05:52.442574    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:05:52.442586    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:05:52.455036    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:05:52.455046    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:05:52.468295    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:05:52.468306    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:05:54.982082    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:05:59.984364    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:05:59.984609    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:00.006396    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:00.006482    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:00.021238    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:00.021307    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:00.033698    4225 logs.go:276] 2 containers: [2de5f4a7e171 62c9cee5b160]
	I0719 12:06:00.033775    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:00.045832    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:00.045902    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:00.056454    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:00.056523    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:00.067570    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:00.067641    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:00.078718    4225 logs.go:276] 0 containers: []
	W0719 12:06:00.078729    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:00.078783    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:00.088936    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:00.088948    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:00.088953    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:00.103599    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:00.103610    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:00.115517    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:00.115529    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:00.132800    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:00.132810    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:00.144961    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:00.144970    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:00.170331    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:00.170347    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:00.208197    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:00.208206    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:00.222765    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:00.222781    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:00.234700    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:00.234711    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:00.249779    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:00.249789    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:00.262774    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:00.262795    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:00.274373    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:00.274383    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:00.278582    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:00.278591    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:02.818658    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:07.820961    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:07.821137    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:07.837000    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:07.837072    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:07.861185    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:07.861260    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:07.871817    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:07.871895    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:07.882287    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:07.882356    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:07.892854    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:07.892917    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:07.903082    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:07.903149    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:07.914397    4225 logs.go:276] 0 containers: []
	W0719 12:06:07.914406    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:07.914456    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:07.925151    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:07.925168    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:07.925175    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:07.940522    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:07.940537    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:07.954331    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:07.954341    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:07.971197    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:07.971207    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:07.983189    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:07.983202    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:07.994853    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:07.994865    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:08.009094    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:08.009105    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:08.020892    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:08.020906    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:08.035370    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:08.035382    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:08.040232    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:08.040239    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:08.055464    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:08.055475    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:08.080411    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:08.080418    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:08.115647    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:08.115659    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:08.127371    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:08.127382    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:08.164661    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:08.164671    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:10.678031    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:15.680210    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:15.680417    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:15.708486    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:15.708589    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:15.725400    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:15.725478    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:15.738391    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:15.738474    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:15.749212    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:15.749278    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:15.759827    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:15.759890    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:15.770601    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:15.770665    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:15.782084    4225 logs.go:276] 0 containers: []
	W0719 12:06:15.782095    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:15.782155    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:15.792370    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:15.792387    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:15.792392    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:15.818188    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:15.818199    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:15.831670    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:15.831681    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:15.850178    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:15.850192    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:15.861395    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:15.861407    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:15.877166    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:15.877177    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:15.890047    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:15.890059    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:15.894803    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:15.894810    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:15.908619    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:15.908629    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:15.920197    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:15.920211    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:15.937395    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:15.937414    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:15.971826    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:15.971837    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:15.986369    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:15.986383    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:15.998029    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:15.998043    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:16.009312    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:16.009324    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:18.548010    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:23.549116    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:23.549328    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:23.568386    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:23.568478    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:23.582442    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:23.582515    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:23.597995    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:23.598070    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:23.608368    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:23.608434    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:23.619232    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:23.619295    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:23.631490    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:23.631559    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:23.641758    4225 logs.go:276] 0 containers: []
	W0719 12:06:23.641767    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:23.641816    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:23.652031    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:23.652049    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:23.652055    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:23.664551    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:23.664562    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:23.678580    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:23.678592    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:23.690347    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:23.690357    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:23.701965    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:23.701975    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:23.713706    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:23.713716    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:23.726855    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:23.726867    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:23.745800    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:23.745811    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:23.761179    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:23.761190    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:23.779962    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:23.779976    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:23.819169    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:23.819183    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:23.833181    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:23.833191    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:23.858522    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:23.858532    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:23.873181    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:23.873190    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:23.908716    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:23.908732    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:26.415317    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:31.417741    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:31.418054    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:31.450720    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:31.450846    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:31.470195    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:31.470309    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:31.484491    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:31.484569    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:31.496004    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:31.496060    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:31.511077    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:31.511145    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:31.521919    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:31.521992    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:31.532055    4225 logs.go:276] 0 containers: []
	W0719 12:06:31.532068    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:31.532126    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:31.543379    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:31.543396    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:31.543401    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:31.581314    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:31.581322    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:31.593576    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:31.593588    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:31.630692    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:31.630703    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:31.649041    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:31.649051    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:31.673865    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:31.673872    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:31.688240    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:31.688251    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:31.699776    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:31.699786    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:31.711270    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:31.711280    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:31.734810    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:31.734820    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:31.754550    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:31.754562    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:31.759259    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:31.759267    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:31.778785    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:31.778798    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:31.790305    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:31.790316    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:31.811372    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:31.811384    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:34.325578    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:39.326659    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:39.326892    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:39.352301    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:39.352423    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:39.370085    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:39.370174    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:39.383491    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:39.383565    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:39.395167    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:39.395231    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:39.405623    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:39.405686    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:39.415987    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:39.416060    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:39.426631    4225 logs.go:276] 0 containers: []
	W0719 12:06:39.426642    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:39.426698    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:39.438333    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:39.438349    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:39.438354    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:39.450321    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:39.450331    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:39.461749    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:39.461762    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:39.473261    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:39.473274    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:39.487072    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:39.487085    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:39.498385    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:39.498398    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:39.518044    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:39.518054    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:39.532265    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:39.532276    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:39.567986    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:39.568022    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:39.583233    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:39.583244    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:39.607114    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:39.607124    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:39.618980    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:39.618992    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:39.657872    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:39.657884    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:39.669376    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:39.669387    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:39.686592    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:39.686601    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:42.193389    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:47.195883    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:47.196335    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:47.237423    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:47.237539    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:47.262759    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:47.262859    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:47.277812    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:47.277878    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:47.289846    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:47.289913    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:47.300785    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:47.300853    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:47.318050    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:47.318108    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:47.332694    4225 logs.go:276] 0 containers: []
	W0719 12:06:47.332706    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:47.332758    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:47.343076    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:47.343097    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:47.343102    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:47.355305    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:47.355315    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:47.369540    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:47.369552    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:47.381168    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:47.381177    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:47.395632    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:47.395647    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:47.413299    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:47.413308    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:47.424642    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:47.424652    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:47.438566    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:47.438574    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:47.450358    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:47.450371    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:47.467213    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:47.467223    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:47.487672    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:47.487683    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:47.511817    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:47.511827    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:47.548504    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:47.548510    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:47.552667    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:47.552673    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:47.588229    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:47.588240    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:50.101982    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:06:55.104365    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:06:55.104532    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:06:55.116618    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:06:55.116683    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:06:55.126428    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:06:55.126478    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:06:55.137181    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:06:55.137246    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:06:55.148576    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:06:55.148632    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:06:55.159306    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:06:55.159375    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:06:55.170237    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:06:55.170296    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:06:55.180418    4225 logs.go:276] 0 containers: []
	W0719 12:06:55.180430    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:06:55.180478    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:06:55.194799    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:06:55.194819    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:06:55.194826    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:06:55.199207    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:06:55.199214    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:06:55.237306    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:06:55.237318    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:06:55.248848    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:06:55.248859    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:06:55.263156    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:06:55.263166    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:06:55.274335    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:06:55.274348    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:06:55.291430    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:06:55.291441    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:06:55.315568    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:06:55.315577    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:06:55.331485    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:06:55.331495    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:06:55.343244    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:06:55.343254    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:06:55.357057    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:06:55.357066    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:06:55.369022    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:06:55.369033    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:06:55.380470    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:06:55.380482    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:06:55.391898    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:06:55.391909    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:06:55.430236    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:06:55.430245    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:06:57.946677    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:02.949467    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:02.949631    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:02.971220    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:02.971302    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:02.988493    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:02.988564    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:03.002491    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:03.002565    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:03.014074    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:03.014145    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:03.024770    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:03.024829    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:03.035139    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:03.035197    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:03.045438    4225 logs.go:276] 0 containers: []
	W0719 12:07:03.045448    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:03.045494    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:03.055981    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:03.055998    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:03.056003    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:03.074411    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:03.074423    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:03.086372    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:03.086386    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:03.097979    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:03.097991    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:03.109824    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:03.109836    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:03.121064    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:03.121077    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:03.132453    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:03.132465    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:03.151724    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:03.151737    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:03.165892    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:03.165903    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:03.183438    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:03.183450    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:03.195302    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:03.195313    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:03.220609    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:03.220616    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:03.258448    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:03.258457    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:03.293221    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:03.293232    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:03.297348    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:03.297356    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:05.811309    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:10.814636    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:10.815014    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:10.858547    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:10.858681    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:10.878164    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:10.878258    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:10.892792    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:10.892866    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:10.905342    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:10.905409    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:10.916176    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:10.916238    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:10.926708    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:10.926774    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:10.937350    4225 logs.go:276] 0 containers: []
	W0719 12:07:10.937362    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:10.937415    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:10.947830    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:10.947847    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:10.947852    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:10.986026    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:10.986033    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:10.999896    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:10.999909    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:11.011489    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:11.011502    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:11.023324    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:11.023338    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:11.035233    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:11.035245    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:11.047341    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:11.047351    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:11.065963    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:11.065975    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:11.077759    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:11.077770    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:11.090758    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:11.090771    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:11.110097    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:11.110115    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:11.114667    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:11.114675    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:11.149977    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:11.149988    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:11.164498    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:11.164509    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:11.179961    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:11.179974    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:13.706800    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:18.709550    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:18.709828    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:18.740518    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:18.740645    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:18.758886    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:18.758971    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:18.773240    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:18.773314    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:18.784551    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:18.784626    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:18.795781    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:18.795850    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:18.806908    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:18.806977    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:18.817744    4225 logs.go:276] 0 containers: []
	W0719 12:07:18.817755    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:18.817812    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:18.827867    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:18.827884    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:18.827890    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:18.841814    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:18.841824    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:18.859019    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:18.859028    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:18.870742    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:18.870755    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:18.885475    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:18.885487    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:18.891801    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:18.891813    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:18.927692    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:18.927704    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:18.942196    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:18.942207    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:18.967868    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:18.967875    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:18.979673    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:18.979684    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:19.017921    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:19.017928    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:19.029444    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:19.029453    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:19.040520    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:19.040534    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:19.052951    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:19.052962    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:19.068398    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:19.068408    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:21.582549    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:26.585305    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:26.585709    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:26.629877    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:26.630021    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:26.649718    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:26.649802    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:26.664468    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:26.664546    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:26.677260    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:26.677317    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:26.688449    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:26.688516    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:26.699519    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:26.699587    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:26.709275    4225 logs.go:276] 0 containers: []
	W0719 12:07:26.709287    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:26.709339    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:26.719946    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:26.719964    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:26.719969    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:26.734534    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:26.734547    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:26.746204    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:26.746213    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:26.757950    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:26.757964    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:26.783246    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:26.783252    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:26.794537    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:26.794550    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:26.832701    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:26.832714    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:26.845284    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:26.845302    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:26.859350    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:26.859362    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:26.874427    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:26.874438    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:26.892467    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:26.892477    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:26.907401    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:26.907411    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:26.912078    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:26.912084    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:26.950051    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:26.950062    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:26.967711    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:26.967721    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:29.481690    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:34.484491    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:34.484794    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:34.517442    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:34.517583    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:34.539419    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:34.539514    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:34.553613    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:34.553683    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:34.565390    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:34.565450    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:34.575744    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:34.575808    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:34.586259    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:34.586317    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:34.597967    4225 logs.go:276] 0 containers: []
	W0719 12:07:34.597979    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:34.598038    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:34.608225    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:34.608243    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:34.608249    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:34.644793    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:34.644805    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:34.658812    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:34.658822    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:34.672860    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:34.672872    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:34.695704    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:34.695712    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:34.710991    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:34.711002    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:34.722543    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:34.722554    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:34.738205    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:34.738218    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:34.750064    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:34.750078    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:34.761439    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:34.761452    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:34.765770    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:34.765779    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:34.800846    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:34.800860    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:34.812756    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:34.812769    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:34.824112    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:34.824121    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:34.837068    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:34.837078    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:37.360574    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:42.363185    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:42.363629    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:42.405965    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:42.406094    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:42.440308    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:42.440394    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:42.469768    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:42.469856    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:42.480629    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:42.480689    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:42.491347    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:42.491413    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:42.502278    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:42.502360    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:42.512246    4225 logs.go:276] 0 containers: []
	W0719 12:07:42.512257    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:42.512313    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:42.523115    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:42.523131    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:42.523136    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:42.561664    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:42.561673    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:42.579984    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:42.579997    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:42.591864    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:42.591876    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:42.603311    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:42.603324    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:42.618880    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:42.618891    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:42.638871    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:42.638885    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:42.661838    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:42.661847    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:42.665947    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:42.665955    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:42.680652    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:42.680664    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:42.692099    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:42.692111    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:42.703905    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:42.703915    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:42.715655    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:42.715669    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:42.734127    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:42.734137    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:42.745203    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:42.745214    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:45.282449    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:50.284692    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:50.284968    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:50.315599    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:50.315707    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:50.334443    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:50.334512    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:50.346942    4225 logs.go:276] 4 containers: [7825519f461e c0cc709dc995 2de5f4a7e171 62c9cee5b160]
	I0719 12:07:50.347014    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:50.357839    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:50.357904    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:50.367778    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:50.367834    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:50.378014    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:50.378078    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:50.388413    4225 logs.go:276] 0 containers: []
	W0719 12:07:50.388425    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:50.388474    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:50.398994    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:50.399010    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:50.399014    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:50.435935    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:50.435943    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:50.453201    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:50.453212    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:50.476099    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:50.476108    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:50.487610    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:50.487621    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:50.499376    4225 logs.go:123] Gathering logs for coredns [2de5f4a7e171] ...
	I0719 12:07:50.499385    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2de5f4a7e171"
	I0719 12:07:50.510954    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:50.510964    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:50.530870    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:50.530882    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:50.543391    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:50.543404    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:50.581399    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:50.581409    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:50.592917    4225 logs.go:123] Gathering logs for coredns [62c9cee5b160] ...
	I0719 12:07:50.592927    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62c9cee5b160"
	I0719 12:07:50.604280    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:50.604293    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:50.608410    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:50.608419    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:50.622812    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:50.622825    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:50.637656    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:50.637668    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:53.155903    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:07:58.158611    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:07:58.159077    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 12:07:58.197885    4225 logs.go:276] 1 containers: [3f9d05bf1805]
	I0719 12:07:58.198023    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 12:07:58.220745    4225 logs.go:276] 1 containers: [d0342530fc57]
	I0719 12:07:58.220866    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 12:07:58.236369    4225 logs.go:276] 4 containers: [58bb1d8a61b9 ed57f8ee81ca 7825519f461e c0cc709dc995]
	I0719 12:07:58.236440    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 12:07:58.248800    4225 logs.go:276] 1 containers: [72521dc35142]
	I0719 12:07:58.248870    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 12:07:58.260080    4225 logs.go:276] 1 containers: [1ba3c76948fa]
	I0719 12:07:58.260152    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 12:07:58.271041    4225 logs.go:276] 1 containers: [a4bffd71cffc]
	I0719 12:07:58.271107    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 12:07:58.281717    4225 logs.go:276] 0 containers: []
	W0719 12:07:58.281730    4225 logs.go:278] No container was found matching "kindnet"
	I0719 12:07:58.281787    4225 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 12:07:58.292258    4225 logs.go:276] 1 containers: [a79c98a935a6]
	I0719 12:07:58.292275    4225 logs.go:123] Gathering logs for storage-provisioner [a79c98a935a6] ...
	I0719 12:07:58.292280    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a79c98a935a6"
	I0719 12:07:58.304364    4225 logs.go:123] Gathering logs for etcd [d0342530fc57] ...
	I0719 12:07:58.304376    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0342530fc57"
	I0719 12:07:58.318996    4225 logs.go:123] Gathering logs for coredns [ed57f8ee81ca] ...
	I0719 12:07:58.319006    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed57f8ee81ca"
	I0719 12:07:58.330372    4225 logs.go:123] Gathering logs for coredns [7825519f461e] ...
	I0719 12:07:58.330384    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7825519f461e"
	I0719 12:07:58.342129    4225 logs.go:123] Gathering logs for coredns [c0cc709dc995] ...
	I0719 12:07:58.342142    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0cc709dc995"
	I0719 12:07:58.354058    4225 logs.go:123] Gathering logs for describe nodes ...
	I0719 12:07:58.354068    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 12:07:58.388595    4225 logs.go:123] Gathering logs for kube-apiserver [3f9d05bf1805] ...
	I0719 12:07:58.388610    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f9d05bf1805"
	I0719 12:07:58.403212    4225 logs.go:123] Gathering logs for kube-controller-manager [a4bffd71cffc] ...
	I0719 12:07:58.403223    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4bffd71cffc"
	I0719 12:07:58.423369    4225 logs.go:123] Gathering logs for container status ...
	I0719 12:07:58.423380    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 12:07:58.435030    4225 logs.go:123] Gathering logs for dmesg ...
	I0719 12:07:58.435045    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 12:07:58.439162    4225 logs.go:123] Gathering logs for kube-scheduler [72521dc35142] ...
	I0719 12:07:58.439171    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72521dc35142"
	I0719 12:07:58.454862    4225 logs.go:123] Gathering logs for kube-proxy [1ba3c76948fa] ...
	I0719 12:07:58.454873    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ba3c76948fa"
	I0719 12:07:58.467443    4225 logs.go:123] Gathering logs for Docker ...
	I0719 12:07:58.467455    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 12:07:58.490489    4225 logs.go:123] Gathering logs for kubelet ...
	I0719 12:07:58.490498    4225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 12:07:58.526191    4225 logs.go:123] Gathering logs for coredns [58bb1d8a61b9] ...
	I0719 12:07:58.526201    4225 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58bb1d8a61b9"
	I0719 12:08:01.040727    4225 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 12:08:06.042986    4225 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 12:08:06.046916    4225 out.go:177] 
	W0719 12:08:06.049772    4225 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0719 12:08:06.049789    4225 out.go:239] * 
	W0719 12:08:06.051973    4225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:06.063799    4225 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-275000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.45s)
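
Triage note: each retry cycle above probes https://10.0.2.15:8443/healthz and gives up after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", i.e. the request is sent but no response headers ever arrive, so minikube falls back to gathering container logs and tries again until the 6m0s budget is exhausted. A minimal standalone probe reproducing the same failure mode is sketched below; the 5-second timeout and the skipped TLS verification are assumptions read off the log timestamps (12:06:42.193 to 12:06:47.195 and similar pairs), not minikube's actual api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Assumption: ~5s client timeout, matching the gap between each
			// "Checking apiserver healthz" / "stopped:" pair in the log.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Ad-hoc probe only: do not verify the apiserver's certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// A wedged apiserver surfaces here as "context deadline exceeded
			// (Client.Timeout exceeded while awaiting headers)".
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body)) // a healthy apiserver answers 200 "ok"
	}

Against this guest the probe would never print a status line, which is consistent with the log: docker ps keeps finding the kube-apiserver container (3f9d05bf1805), but the process inside never serves /healthz.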

                                                
                                    
TestPause/serial/Start (9.9s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-084000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0719 12:06:49.509086    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-084000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.83236875s)

                                                
                                                
-- stdout --
	* [pause-084000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-084000" primary control-plane node in "pause-084000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-084000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-084000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-084000 -n pause-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-084000 -n pause-084000: exit status 7 (67.310209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.90s)
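
Note on the post-mortem step above: `minikube status` returning exit code 7 with output "Stopped" is treated as an expected state rather than a second failure, which is why log retrieval is skipped. A sketch of that interpretation (binary path and profile name copied from the run above; the exit-code mapping is inferred from the "may be ok" note in the helper output, not from minikube documentation):

	// status_probe.go: sketch of the helpers_test.go post-mortem: query the
	// host state and map exit code 7 to "stopped, skip log retrieval".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "pause-084000", "-n", "pause-084000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Stopped" in the failing runs above
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			fmt.Println("status error: exit status 7 (may be ok); host not running")
		}
	}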

TestNoKubernetes/serial/StartWithK8s (9.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-733000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-733000 --driver=qemu2 : exit status 80 (9.767850875s)

-- stdout --
	* [NoKubernetes-733000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-733000" primary control-plane node in "NoKubernetes-733000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-733000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-733000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-733000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000: exit status 7 (64.904083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-733000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.83s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --driver=qemu2 : exit status 80 (5.24456925s)

-- stdout --
	* [NoKubernetes-733000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-733000
	* Restarting existing qemu2 VM for "NoKubernetes-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-733000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000: exit status 7 (54.233209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-733000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
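
The stdout above shows the start path for an existing profile: the error chain is prefixed `driver start:` (restart of an existing VM) rather than `creating host: create: creating:` (fresh create), but the control flow is the same in both cases: one failed StartHost, one retry, then exit status 80 with GUEST_PROVISION. A condensed sketch of that retry (startHost is a stand-in for the real libmachine call; the 5-second pause matches the "Will try again in 5 seconds" lines in the detailed traces later in this report):

	// retry_start.go: sketch of the single StartHost retry visible above.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func startHost() error {
		// Stand-in for the libmachine call; always fails the way this report does.
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host:", err)
				os.Exit(80) // the exit status every failing start in this report returns
			}
		}
	}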

TestNoKubernetes/serial/Start (5.26s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --driver=qemu2 : exit status 80 (5.228459458s)

-- stdout --
	* [NoKubernetes-733000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-733000
	* Restarting existing qemu2 VM for "NoKubernetes-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-733000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000: exit status 7 (30.53125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-733000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.26s)

TestNoKubernetes/serial/StartNoArgs (5.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-733000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-733000 --driver=qemu2 : exit status 80 (5.263059583s)

-- stdout --
	* [NoKubernetes-733000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-733000
	* Restarting existing qemu2 VM for "NoKubernetes-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-733000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-733000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-733000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-733000 -n NoKubernetes-733000: exit status 7 (31.005916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-733000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)

TestNetworkPlugins/group/auto/Start (9.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.739694125s)

-- stdout --
	* [auto-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-601000" primary control-plane node in "auto-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:07:53.608058    4745 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:07:53.608184    4745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:07:53.608187    4745 out.go:304] Setting ErrFile to fd 2...
	I0719 12:07:53.608189    4745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:07:53.608332    4745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:07:53.609487    4745 out.go:298] Setting JSON to false
	I0719 12:07:53.625906    4745 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4036,"bootTime":1721412037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:07:53.625980    4745 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:07:53.631028    4745 out.go:177] * [auto-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:07:53.637996    4745 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:07:53.638031    4745 notify.go:220] Checking for updates...
	I0719 12:07:53.645006    4745 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:07:53.647957    4745 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:07:53.651020    4745 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:07:53.653942    4745 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:07:53.656963    4745 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:07:53.660327    4745 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:07:53.660392    4745 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:07:53.660440    4745 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:07:53.662969    4745 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:07:53.669988    4745 start.go:297] selected driver: qemu2
	I0719 12:07:53.669994    4745 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:07:53.669999    4745 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:07:53.672173    4745 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:07:53.673501    4745 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:07:53.677042    4745 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:07:53.677074    4745 cni.go:84] Creating CNI manager for ""
	I0719 12:07:53.677081    4745 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:07:53.677086    4745 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:07:53.677112    4745 start.go:340] cluster config:
	{Name:auto-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:07:53.680483    4745 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:07:53.687950    4745 out.go:177] * Starting "auto-601000" primary control-plane node in "auto-601000" cluster
	I0719 12:07:53.691991    4745 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:07:53.692006    4745 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:07:53.692019    4745 cache.go:56] Caching tarball of preloaded images
	I0719 12:07:53.692076    4745 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:07:53.692083    4745 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:07:53.692146    4745 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/auto-601000/config.json ...
	I0719 12:07:53.692161    4745 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/auto-601000/config.json: {Name:mkb8b23fd555daa49aef9f3d07047895e549f5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:07:53.692359    4745 start.go:360] acquireMachinesLock for auto-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:07:53.692388    4745 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "auto-601000"
	I0719 12:07:53.692397    4745 start.go:93] Provisioning new machine with config: &{Name:auto-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:07:53.692419    4745 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:07:53.700984    4745 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:07:53.716255    4745 start.go:159] libmachine.API.Create for "auto-601000" (driver="qemu2")
	I0719 12:07:53.716279    4745 client.go:168] LocalClient.Create starting
	I0719 12:07:53.716348    4745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:07:53.716377    4745 main.go:141] libmachine: Decoding PEM data...
	I0719 12:07:53.716385    4745 main.go:141] libmachine: Parsing certificate...
	I0719 12:07:53.716424    4745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:07:53.716446    4745 main.go:141] libmachine: Decoding PEM data...
	I0719 12:07:53.716452    4745 main.go:141] libmachine: Parsing certificate...
	I0719 12:07:53.716840    4745 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:07:53.853687    4745 main.go:141] libmachine: Creating SSH key...
	I0719 12:07:53.902554    4745 main.go:141] libmachine: Creating Disk image...
	I0719 12:07:53.902562    4745 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:07:53.902722    4745 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2
	I0719 12:07:53.911859    4745 main.go:141] libmachine: STDOUT: 
	I0719 12:07:53.911877    4745 main.go:141] libmachine: STDERR: 
	I0719 12:07:53.911923    4745 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2 +20000M
	I0719 12:07:53.919732    4745 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:07:53.919748    4745 main.go:141] libmachine: STDERR: 
	I0719 12:07:53.919760    4745 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2
	I0719 12:07:53.919763    4745 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:07:53.919777    4745 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:07:53.919804    4745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c9:94:d4:21:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2
	I0719 12:07:53.921342    4745 main.go:141] libmachine: STDOUT: 
	I0719 12:07:53.921358    4745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:07:53.921382    4745 client.go:171] duration metric: took 205.102375ms to LocalClient.Create
	I0719 12:07:55.923587    4745 start.go:128] duration metric: took 2.231164958s to createHost
	I0719 12:07:55.923665    4745 start.go:83] releasing machines lock for "auto-601000", held for 2.231298833s
	W0719 12:07:55.923809    4745 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:07:55.937079    4745 out.go:177] * Deleting "auto-601000" in qemu2 ...
	W0719 12:07:55.960483    4745 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:07:55.960512    4745 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:00.962542    4745 start.go:360] acquireMachinesLock for auto-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:00.962749    4745 start.go:364] duration metric: took 164.958µs to acquireMachinesLock for "auto-601000"
	I0719 12:08:00.962775    4745 start.go:93] Provisioning new machine with config: &{Name:auto-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:00.962919    4745 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:00.971215    4745 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:00.998748    4745 start.go:159] libmachine.API.Create for "auto-601000" (driver="qemu2")
	I0719 12:08:00.998792    4745 client.go:168] LocalClient.Create starting
	I0719 12:08:00.998881    4745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:00.998922    4745 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:00.998932    4745 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:00.998984    4745 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:00.999015    4745 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:00.999028    4745 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:00.999382    4745 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:01.139967    4745 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:01.255325    4745 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:01.255331    4745 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:01.255487    4745 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2
	I0719 12:08:01.264893    4745 main.go:141] libmachine: STDOUT: 
	I0719 12:08:01.264915    4745 main.go:141] libmachine: STDERR: 
	I0719 12:08:01.264981    4745 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2 +20000M
	I0719 12:08:01.273388    4745 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:01.273404    4745 main.go:141] libmachine: STDERR: 
	I0719 12:08:01.273424    4745 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2
	I0719 12:08:01.273429    4745 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:01.273439    4745 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:01.273472    4745 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:79:ac:ad:bb:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/auto-601000/disk.qcow2
	I0719 12:08:01.275336    4745 main.go:141] libmachine: STDOUT: 
	I0719 12:08:01.275351    4745 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:01.275364    4745 client.go:171] duration metric: took 276.570375ms to LocalClient.Create
	I0719 12:08:03.277652    4745 start.go:128] duration metric: took 2.314732334s to createHost
	I0719 12:08:03.277735    4745 start.go:83] releasing machines lock for "auto-601000", held for 2.315002917s
	W0719 12:08:03.278205    4745 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:03.289605    4745 out.go:177] 
	W0719 12:08:03.293917    4745 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:08:03.293944    4745 out.go:239] * 
	* 
	W0719 12:08:03.296565    4745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:03.305929    4745 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.74s)
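
The trace above also records exactly how the VM is launched: libmachine does not run qemu-system-aarch64 directly but wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to /var/run/socket_vmnet and then hand the connected socket down to qemu as file descriptor 3 (hence `-netdev socket,id=net0,fd=3`). When that initial connect is refused, qemu never starts at all. An abridged reconstruction of the "executing:" line follows (flag set heavily trimmed; the fd-passing description is an inference from how socket_vmnet_client is invoked here, not something this log states):

	// launch_sketch.go: abridged form of the libmachine "executing:" line above.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", // the connect that fails with "Connection refused" here
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-netdev", "socket,id=net0,fd=3", // fd 3 is supplied by socket_vmnet_client
			// ...disk, ISO, QMP, and pidfile flags as in the full command above...
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run() // exits non-zero while the unix socket is unreachable
	}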

TestNetworkPlugins/group/calico/Start (9.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.749619042s)

-- stdout --
	* [calico-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-601000" primary control-plane node in "calico-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:05.466288    4854 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:05.466424    4854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:05.466428    4854 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:05.466431    4854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:05.466566    4854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:05.467638    4854 out.go:298] Setting JSON to false
	I0719 12:08:05.484101    4854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4048,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:05.484190    4854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:05.490730    4854 out.go:177] * [calico-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:05.497649    4854 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:05.497694    4854 notify.go:220] Checking for updates...
	I0719 12:08:05.504735    4854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:05.506268    4854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:05.509798    4854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:05.512780    4854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:05.515788    4854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:05.519084    4854 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:05.519149    4854 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:08:05.519198    4854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:05.523715    4854 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:05.530740    4854 start.go:297] selected driver: qemu2
	I0719 12:08:05.530748    4854 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:05.530755    4854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:05.532985    4854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:05.535767    4854 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:08:05.538813    4854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:05.538831    4854 cni.go:84] Creating CNI manager for "calico"
	I0719 12:08:05.538839    4854 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0719 12:08:05.538862    4854 start.go:340] cluster config:
	{Name:calico-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:05.542753    4854 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:05.550736    4854 out.go:177] * Starting "calico-601000" primary control-plane node in "calico-601000" cluster
	I0719 12:08:05.553684    4854 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:05.553698    4854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:05.553708    4854 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:05.553771    4854 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:05.553776    4854 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:05.553828    4854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/calico-601000/config.json ...
	I0719 12:08:05.553841    4854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/calico-601000/config.json: {Name:mkbc74d0516d07fe73b5878924a805b46f036195 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:05.554165    4854 start.go:360] acquireMachinesLock for calico-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:05.554199    4854 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "calico-601000"
	I0719 12:08:05.554209    4854 start.go:93] Provisioning new machine with config: &{Name:calico-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:05.554241    4854 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:05.561724    4854 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:05.579254    4854 start.go:159] libmachine.API.Create for "calico-601000" (driver="qemu2")
	I0719 12:08:05.579280    4854 client.go:168] LocalClient.Create starting
	I0719 12:08:05.579343    4854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:05.579385    4854 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:05.579395    4854 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:05.579430    4854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:05.579453    4854 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:05.579462    4854 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:05.579846    4854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:05.717759    4854 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:05.748906    4854 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:05.748911    4854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:05.749066    4854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2
	I0719 12:08:05.758263    4854 main.go:141] libmachine: STDOUT: 
	I0719 12:08:05.758283    4854 main.go:141] libmachine: STDERR: 
	I0719 12:08:05.758329    4854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2 +20000M
	I0719 12:08:05.766684    4854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:05.766699    4854 main.go:141] libmachine: STDERR: 
	I0719 12:08:05.766723    4854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2
	I0719 12:08:05.766728    4854 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:05.766741    4854 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:05.766766    4854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:ae:27:ac:ab:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2
	I0719 12:08:05.768502    4854 main.go:141] libmachine: STDOUT: 
	I0719 12:08:05.768520    4854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:05.768540    4854 client.go:171] duration metric: took 189.253542ms to LocalClient.Create
	I0719 12:08:07.770619    4854 start.go:128] duration metric: took 2.216396291s to createHost
	I0719 12:08:07.770655    4854 start.go:83] releasing machines lock for "calico-601000", held for 2.216481208s
	W0719 12:08:07.770715    4854 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:07.780795    4854 out.go:177] * Deleting "calico-601000" in qemu2 ...
	W0719 12:08:07.800135    4854 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:07.800148    4854 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:12.802268    4854 start.go:360] acquireMachinesLock for calico-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:12.802523    4854 start.go:364] duration metric: took 196.959µs to acquireMachinesLock for "calico-601000"
	I0719 12:08:12.802559    4854 start.go:93] Provisioning new machine with config: &{Name:calico-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:12.802781    4854 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:12.811639    4854 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:12.848027    4854 start.go:159] libmachine.API.Create for "calico-601000" (driver="qemu2")
	I0719 12:08:12.848080    4854 client.go:168] LocalClient.Create starting
	I0719 12:08:12.848172    4854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:12.848233    4854 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:12.848253    4854 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:12.848303    4854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:12.848347    4854 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:12.848355    4854 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:12.848921    4854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:12.995119    4854 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:13.120449    4854 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:13.120456    4854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:13.120621    4854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2
	I0719 12:08:13.130116    4854 main.go:141] libmachine: STDOUT: 
	I0719 12:08:13.130137    4854 main.go:141] libmachine: STDERR: 
	I0719 12:08:13.130190    4854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2 +20000M
	I0719 12:08:13.138530    4854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:13.138547    4854 main.go:141] libmachine: STDERR: 
	I0719 12:08:13.138568    4854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2
	I0719 12:08:13.138573    4854 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:13.138581    4854 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:13.138607    4854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:87:d2:ac:56:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/calico-601000/disk.qcow2
	I0719 12:08:13.140297    4854 main.go:141] libmachine: STDOUT: 
	I0719 12:08:13.140311    4854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:13.140325    4854 client.go:171] duration metric: took 292.245333ms to LocalClient.Create
	I0719 12:08:15.142509    4854 start.go:128] duration metric: took 2.339709542s to createHost
	I0719 12:08:15.142587    4854 start.go:83] releasing machines lock for "calico-601000", held for 2.340077583s
	W0719 12:08:15.143016    4854 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:15.156774    4854 out.go:177] 
	W0719 12:08:15.160752    4854 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:08:15.160786    4854 out.go:239] * 
	* 
	W0719 12:08:15.163261    4854 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:15.174750    4854 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.75s)
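Editor's note: every failure in this group reduces to the same root cause, visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched. As a quick host-side check, one can probe the socket directly in Go. This is a minimal diagnostic sketch, not part of the test suite; only the socket path is taken from the log.

// probe_socket_vmnet.go - minimal sketch: checks whether the socket_vmnet
// daemon is accepting connections on the path reported in the failing logs.
// Illustrative only; not part of minikube.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the test logs

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the STDERR above: the daemon
		// is not running, or not listening on this path.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this dial also reports "connection refused", the socket_vmnet daemon is simply not running on the build host, and every qemu2 start in this report will fail the same way regardless of the CNI under test.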

TestNetworkPlugins/group/custom-flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.780531084s)

-- stdout --
	* [custom-flannel-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-601000" primary control-plane node in "custom-flannel-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:17.518079    4978 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:17.518219    4978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:17.518222    4978 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:17.518225    4978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:17.518380    4978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:17.519471    4978 out.go:298] Setting JSON to false
	I0719 12:08:17.535841    4978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4060,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:17.535919    4978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:17.542829    4978 out.go:177] * [custom-flannel-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:17.550013    4978 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:17.550019    4978 notify.go:220] Checking for updates...
	I0719 12:08:17.555869    4978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:17.558932    4978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:17.560370    4978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:17.562844    4978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:17.565884    4978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:17.569283    4978 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:17.569347    4978 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:08:17.569393    4978 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:17.573893    4978 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:17.580951    4978 start.go:297] selected driver: qemu2
	I0719 12:08:17.580960    4978 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:17.580967    4978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:17.583215    4978 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:17.585915    4978 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:08:17.588970    4978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:17.588997    4978 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0719 12:08:17.589005    4978 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0719 12:08:17.589039    4978 start.go:340] cluster config:
	{Name:custom-flannel-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:17.592613    4978 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:17.600868    4978 out.go:177] * Starting "custom-flannel-601000" primary control-plane node in "custom-flannel-601000" cluster
	I0719 12:08:17.604942    4978 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:17.604959    4978 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:17.604970    4978 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:17.605035    4978 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:17.605041    4978 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:17.605106    4978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/custom-flannel-601000/config.json ...
	I0719 12:08:17.605119    4978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/custom-flannel-601000/config.json: {Name:mk2899581aedc60030a7827e895bf0abebad3367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:17.605336    4978 start.go:360] acquireMachinesLock for custom-flannel-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:17.605371    4978 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "custom-flannel-601000"
	I0719 12:08:17.605381    4978 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:17.605414    4978 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:17.613944    4978 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:17.630600    4978 start.go:159] libmachine.API.Create for "custom-flannel-601000" (driver="qemu2")
	I0719 12:08:17.630628    4978 client.go:168] LocalClient.Create starting
	I0719 12:08:17.630689    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:17.630722    4978 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:17.630731    4978 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:17.630777    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:17.630803    4978 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:17.630810    4978 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:17.631164    4978 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:17.769711    4978 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:17.859361    4978 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:17.859370    4978 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:17.859561    4978 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2
	I0719 12:08:17.868940    4978 main.go:141] libmachine: STDOUT: 
	I0719 12:08:17.868957    4978 main.go:141] libmachine: STDERR: 
	I0719 12:08:17.869024    4978 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2 +20000M
	I0719 12:08:17.877062    4978 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:17.877077    4978 main.go:141] libmachine: STDERR: 
	I0719 12:08:17.877093    4978 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2
	I0719 12:08:17.877097    4978 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:17.877108    4978 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:17.877133    4978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c9:71:79:33:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2
	I0719 12:08:17.878735    4978 main.go:141] libmachine: STDOUT: 
	I0719 12:08:17.878750    4978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:17.878768    4978 client.go:171] duration metric: took 248.139958ms to LocalClient.Create
	I0719 12:08:19.880887    4978 start.go:128] duration metric: took 2.275487792s to createHost
	I0719 12:08:19.880965    4978 start.go:83] releasing machines lock for "custom-flannel-601000", held for 2.275618041s
	W0719 12:08:19.881001    4978 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:19.897547    4978 out.go:177] * Deleting "custom-flannel-601000" in qemu2 ...
	W0719 12:08:19.917758    4978 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:19.917774    4978 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:24.919915    4978 start.go:360] acquireMachinesLock for custom-flannel-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:24.920398    4978 start.go:364] duration metric: took 389.125µs to acquireMachinesLock for "custom-flannel-601000"
	I0719 12:08:24.920514    4978 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:24.920761    4978 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:24.929185    4978 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:24.974793    4978 start.go:159] libmachine.API.Create for "custom-flannel-601000" (driver="qemu2")
	I0719 12:08:24.974854    4978 client.go:168] LocalClient.Create starting
	I0719 12:08:24.974988    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:24.975048    4978 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:24.975066    4978 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:24.975132    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:24.975178    4978 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:24.975193    4978 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:24.975719    4978 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:25.126121    4978 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:25.211302    4978 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:25.211307    4978 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:25.211475    4978 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2
	I0719 12:08:25.221106    4978 main.go:141] libmachine: STDOUT: 
	I0719 12:08:25.221195    4978 main.go:141] libmachine: STDERR: 
	I0719 12:08:25.221244    4978 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2 +20000M
	I0719 12:08:25.229344    4978 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:25.229361    4978 main.go:141] libmachine: STDERR: 
	I0719 12:08:25.229376    4978 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2
	I0719 12:08:25.229381    4978 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:25.229390    4978 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:25.229421    4978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:08:82:9b:5d:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/custom-flannel-601000/disk.qcow2
	I0719 12:08:25.231112    4978 main.go:141] libmachine: STDOUT: 
	I0719 12:08:25.231130    4978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:25.231142    4978 client.go:171] duration metric: took 256.287042ms to LocalClient.Create
	I0719 12:08:27.233284    4978 start.go:128] duration metric: took 2.312525625s to createHost
	I0719 12:08:27.233359    4978 start.go:83] releasing machines lock for "custom-flannel-601000", held for 2.3129765s
	W0719 12:08:27.233717    4978 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:27.241983    4978 out.go:177] 
	W0719 12:08:27.247060    4978 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:08:27.247089    4978 out.go:239] * 
	* 
	W0719 12:08:27.248653    4978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:27.257979    4978 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.78s)
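Editor's note: the control flow around the failure is identical in every test in this group: the first createHost attempt fails, the half-created profile is deleted, minikube waits five seconds (start.go:729) and retries once, and the second failure is surfaced as GUEST_PROVISION with exit status 80. The sketch below is a rough reconstruction of that observable flow only; createHost here is a hypothetical stand-in, not minikube's real API.

// retry_sketch.go - illustrative reconstruction of the start flow seen in
// these logs: one failed create, a fixed 5s wait, one retry, then a
// GUEST_PROVISION error. Not minikube's actual implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost is a hypothetical stand-in for the driver's host creation,
// which in these logs always fails with "connection refused".
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry(profile string) error {
	if err := createHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
	}
	return nil
}

func main() {
	if err := startWithRetry("custom-flannel-601000"); err != nil {
		fmt.Println("X Exiting due to", err) // exit status 80 in the real binary
	}
}

With the daemon down, both attempts fail within milliseconds of launching the VM, which is why each of these tests finishes in roughly ten seconds instead of using any of its fifteen-minute wait budget.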

TestNetworkPlugins/group/false/Start (9.96s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.962906792s)

-- stdout --
	* [false-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-601000" primary control-plane node in "false-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:29.635211    5099 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:29.635370    5099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:29.635374    5099 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:29.635376    5099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:29.635519    5099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:29.636698    5099 out.go:298] Setting JSON to false
	I0719 12:08:29.653598    5099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4072,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:29.653663    5099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:29.660576    5099 out.go:177] * [false-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:29.667484    5099 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:29.667540    5099 notify.go:220] Checking for updates...
	I0719 12:08:29.672949    5099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:29.678410    5099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:29.682453    5099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:29.685443    5099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:29.692424    5099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:29.695801    5099 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:29.695869    5099 config.go:182] Loaded profile config "stopped-upgrade-275000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 12:08:29.695923    5099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:29.700479    5099 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:29.707439    5099 start.go:297] selected driver: qemu2
	I0719 12:08:29.707444    5099 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:29.707449    5099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:29.709699    5099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:29.712439    5099 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:08:29.715552    5099 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:29.715582    5099 cni.go:84] Creating CNI manager for "false"
	I0719 12:08:29.715615    5099 start.go:340] cluster config:
	{Name:false-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:29.719101    5099 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:29.727500    5099 out.go:177] * Starting "false-601000" primary control-plane node in "false-601000" cluster
	I0719 12:08:29.731409    5099 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:29.731421    5099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:29.731430    5099 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:29.731480    5099 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:29.731485    5099 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:29.731541    5099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/false-601000/config.json ...
	I0719 12:08:29.731552    5099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/false-601000/config.json: {Name:mk08d03b5d66aacc609513a7d2d41229ae16c738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:29.731744    5099 start.go:360] acquireMachinesLock for false-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:29.731783    5099 start.go:364] duration metric: took 33.208µs to acquireMachinesLock for "false-601000"
	I0719 12:08:29.731792    5099 start.go:93] Provisioning new machine with config: &{Name:false-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:29.731818    5099 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:29.739447    5099 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:29.754505    5099 start.go:159] libmachine.API.Create for "false-601000" (driver="qemu2")
	I0719 12:08:29.754529    5099 client.go:168] LocalClient.Create starting
	I0719 12:08:29.754587    5099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:29.754619    5099 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:29.754627    5099 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:29.754675    5099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:29.754698    5099 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:29.754705    5099 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:29.755053    5099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:29.891110    5099 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:30.130191    5099 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:30.130203    5099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:30.130434    5099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2
	I0719 12:08:30.140370    5099 main.go:141] libmachine: STDOUT: 
	I0719 12:08:30.140391    5099 main.go:141] libmachine: STDERR: 
	I0719 12:08:30.140443    5099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2 +20000M
	I0719 12:08:30.148948    5099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:30.148962    5099 main.go:141] libmachine: STDERR: 
	I0719 12:08:30.148976    5099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2
	I0719 12:08:30.148981    5099 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:30.148993    5099 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:30.149018    5099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:3d:27:07:06:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2
	I0719 12:08:30.150733    5099 main.go:141] libmachine: STDOUT: 
	I0719 12:08:30.150749    5099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:30.150769    5099 client.go:171] duration metric: took 396.242375ms to LocalClient.Create
	I0719 12:08:32.152932    5099 start.go:128] duration metric: took 2.421119375s to createHost
	I0719 12:08:32.153034    5099 start.go:83] releasing machines lock for "false-601000", held for 2.421276625s
	W0719 12:08:32.153087    5099 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:32.163023    5099 out.go:177] * Deleting "false-601000" in qemu2 ...
	W0719 12:08:32.186861    5099 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:32.186904    5099 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:37.189052    5099 start.go:360] acquireMachinesLock for false-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:37.189602    5099 start.go:364] duration metric: took 431.375µs to acquireMachinesLock for "false-601000"
	I0719 12:08:37.189811    5099 start.go:93] Provisioning new machine with config: &{Name:false-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:37.190069    5099 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:37.200675    5099 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:37.243323    5099 start.go:159] libmachine.API.Create for "false-601000" (driver="qemu2")
	I0719 12:08:37.243386    5099 client.go:168] LocalClient.Create starting
	I0719 12:08:37.243490    5099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:37.243541    5099 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:37.243553    5099 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:37.243604    5099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:37.243656    5099 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:37.243665    5099 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:37.244186    5099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:37.388390    5099 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:37.506435    5099 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:37.506444    5099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:37.506617    5099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2
	I0719 12:08:37.516089    5099 main.go:141] libmachine: STDOUT: 
	I0719 12:08:37.516107    5099 main.go:141] libmachine: STDERR: 
	I0719 12:08:37.516159    5099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2 +20000M
	I0719 12:08:37.524123    5099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:37.524140    5099 main.go:141] libmachine: STDERR: 
	I0719 12:08:37.524149    5099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2
	I0719 12:08:37.524153    5099 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:37.524164    5099 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:37.524197    5099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:71:5c:68:2c:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/false-601000/disk.qcow2
	I0719 12:08:37.525961    5099 main.go:141] libmachine: STDOUT: 
	I0719 12:08:37.525978    5099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:37.525991    5099 client.go:171] duration metric: took 282.604583ms to LocalClient.Create
	I0719 12:08:39.528158    5099 start.go:128] duration metric: took 2.33808575s to createHost
	I0719 12:08:39.528225    5099 start.go:83] releasing machines lock for "false-601000", held for 2.338603625s
	W0719 12:08:39.528483    5099 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:39.543176    5099 out.go:177] 
	W0719 12:08:39.547695    5099 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:08:39.547729    5099 out.go:239] * 
	* 
	W0719 12:08:39.549101    5099 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:39.559115    5099 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.96s)
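All of the failures in this TestNetworkPlugins group share one proximate cause, visible in the stderr above: minikube launches qemu-system-aarch64 through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the initial VM creation and the 5-second retry both fail and the start exits with status 80. A minimal shell sketch for checking the daemon on the affected host follows; the Homebrew service name is an assumption inferred from the /opt/socket_vmnet and /opt/homebrew paths in the log, not something the log itself confirms.

	# Is anything serving the socket the tests expect?
	ls -l /var/run/socket_vmnet      # the listening socket should exist
	pgrep -fl socket_vmnet           # the daemon process should be running

	# If the daemon is down, restart it (assumed to be a Homebrew-managed
	# root service; adjust if socket_vmnet was installed manually):
	sudo brew services restart socket_vmnet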

TestNetworkPlugins/group/kindnet/Start (10.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.469434167s)

-- stdout --
	* [kindnet-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-601000" primary control-plane node in "kindnet-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:38.912266    5118 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:38.912450    5118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:38.912453    5118 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:38.912455    5118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:38.912579    5118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:38.913601    5118 out.go:298] Setting JSON to false
	I0719 12:08:38.929742    5118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4081,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:38.929812    5118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:38.934099    5118 out.go:177] * [kindnet-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:38.941034    5118 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:38.941089    5118 notify.go:220] Checking for updates...
	I0719 12:08:38.947999    5118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:38.950958    5118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:38.954025    5118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:38.957036    5118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:38.959994    5118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:38.963386    5118 config.go:182] Loaded profile config "false-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:38.963455    5118 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:38.963510    5118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:38.967903    5118 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:38.975003    5118 start.go:297] selected driver: qemu2
	I0719 12:08:38.975008    5118 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:38.975014    5118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:38.977247    5118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:38.979987    5118 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:08:38.983084    5118 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:38.983127    5118 cni.go:84] Creating CNI manager for "kindnet"
	I0719 12:08:38.983137    5118 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 12:08:38.983167    5118 start.go:340] cluster config:
	{Name:kindnet-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:38.986827    5118 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:38.994010    5118 out.go:177] * Starting "kindnet-601000" primary control-plane node in "kindnet-601000" cluster
	I0719 12:08:38.997987    5118 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:38.998001    5118 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:38.998014    5118 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:38.998073    5118 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:38.998078    5118 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:38.998148    5118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kindnet-601000/config.json ...
	I0719 12:08:38.998161    5118 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kindnet-601000/config.json: {Name:mk56bf5f7b97f40ebf794670549a8a1c47e1fb2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:38.998387    5118 start.go:360] acquireMachinesLock for kindnet-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:39.528367    5118 start.go:364] duration metric: took 529.929208ms to acquireMachinesLock for "kindnet-601000"
	I0719 12:08:39.528532    5118 start.go:93] Provisioning new machine with config: &{Name:kindnet-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:39.528787    5118 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:39.538132    5118 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:39.582160    5118 start.go:159] libmachine.API.Create for "kindnet-601000" (driver="qemu2")
	I0719 12:08:39.582204    5118 client.go:168] LocalClient.Create starting
	I0719 12:08:39.582337    5118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:39.582412    5118 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:39.582431    5118 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:39.582503    5118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:39.582547    5118 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:39.582563    5118 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:39.583152    5118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:39.731696    5118 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:39.890966    5118 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:39.890978    5118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:39.891179    5118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2
	I0719 12:08:39.900972    5118 main.go:141] libmachine: STDOUT: 
	I0719 12:08:39.900992    5118 main.go:141] libmachine: STDERR: 
	I0719 12:08:39.901051    5118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2 +20000M
	I0719 12:08:39.915285    5118 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:39.915301    5118 main.go:141] libmachine: STDERR: 
	I0719 12:08:39.915315    5118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2
	I0719 12:08:39.915320    5118 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:39.915330    5118 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:39.915356    5118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:42:7a:7b:1e:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2
	I0719 12:08:39.917034    5118 main.go:141] libmachine: STDOUT: 
	I0719 12:08:39.917051    5118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:39.917068    5118 client.go:171] duration metric: took 334.863459ms to LocalClient.Create
	I0719 12:08:41.919138    5118 start.go:128] duration metric: took 2.39034725s to createHost
	I0719 12:08:41.919184    5118 start.go:83] releasing machines lock for "kindnet-601000", held for 2.390821042s
	W0719 12:08:41.919225    5118 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:41.935606    5118 out.go:177] * Deleting "kindnet-601000" in qemu2 ...
	W0719 12:08:41.952350    5118 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:41.952362    5118 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:46.954570    5118 start.go:360] acquireMachinesLock for kindnet-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:46.955034    5118 start.go:364] duration metric: took 344.25µs to acquireMachinesLock for "kindnet-601000"
	I0719 12:08:46.955173    5118 start.go:93] Provisioning new machine with config: &{Name:kindnet-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:46.955431    5118 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:46.966001    5118 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:47.016401    5118 start.go:159] libmachine.API.Create for "kindnet-601000" (driver="qemu2")
	I0719 12:08:47.016462    5118 client.go:168] LocalClient.Create starting
	I0719 12:08:47.016615    5118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:47.016686    5118 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:47.016702    5118 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:47.016759    5118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:47.016806    5118 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:47.016820    5118 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:47.017324    5118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:47.165174    5118 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:47.291347    5118 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:47.291354    5118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:47.291519    5118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2
	I0719 12:08:47.300809    5118 main.go:141] libmachine: STDOUT: 
	I0719 12:08:47.300827    5118 main.go:141] libmachine: STDERR: 
	I0719 12:08:47.300874    5118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2 +20000M
	I0719 12:08:47.308730    5118 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:47.308744    5118 main.go:141] libmachine: STDERR: 
	I0719 12:08:47.308761    5118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2
	I0719 12:08:47.308766    5118 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:47.308775    5118 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:47.308812    5118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:53:da:12:80:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kindnet-601000/disk.qcow2
	I0719 12:08:47.310432    5118 main.go:141] libmachine: STDOUT: 
	I0719 12:08:47.310450    5118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:47.310466    5118 client.go:171] duration metric: took 294.003292ms to LocalClient.Create
	I0719 12:08:49.312686    5118 start.go:128] duration metric: took 2.357215625s to createHost
	I0719 12:08:49.312755    5118 start.go:83] releasing machines lock for "kindnet-601000", held for 2.357728042s
	W0719 12:08:49.313053    5118 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:49.328649    5118 out.go:177] 
	W0719 12:08:49.331614    5118 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:08:49.331649    5118 out.go:239] * 
	* 
	W0719 12:08:49.334600    5118 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:49.342541    5118 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.47s)
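The same failure can be reproduced outside the test harness by invoking the wrapper exactly as the logs show: socket_vmnet_client first connects to the socket and only then execs its command with the connection on an inherited descriptor (fd 3 in the -netdev flags above), so a dead daemon fails immediately. The trailing `true` below is an illustrative stand-in for the qemu-system-aarch64 invocation, not a command taken from the log.

	# Expect the same 'Failed to connect to "/var/run/socket_vmnet":
	# Connection refused' error if the daemon is not running:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true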

TestNetworkPlugins/group/flannel/Start (9.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.9751035s)

-- stdout --
	* [flannel-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-601000" primary control-plane node in "flannel-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:41.728312    5223 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:41.728429    5223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:41.728432    5223 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:41.728435    5223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:41.728567    5223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:41.729660    5223 out.go:298] Setting JSON to false
	I0719 12:08:41.745788    5223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4084,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:41.745873    5223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:41.751073    5223 out.go:177] * [flannel-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:41.757982    5223 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:41.758023    5223 notify.go:220] Checking for updates...
	I0719 12:08:41.764932    5223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:41.767980    5223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:41.770993    5223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:41.773929    5223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:41.776975    5223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:41.780324    5223 config.go:182] Loaded profile config "kindnet-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:41.780395    5223 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:41.780465    5223 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:41.784928    5223 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:41.790939    5223 start.go:297] selected driver: qemu2
	I0719 12:08:41.790947    5223 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:41.790959    5223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:41.793415    5223 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:41.795886    5223 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:08:41.799073    5223 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:41.799089    5223 cni.go:84] Creating CNI manager for "flannel"
	I0719 12:08:41.799103    5223 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0719 12:08:41.799141    5223 start.go:340] cluster config:
	{Name:flannel-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:41.803026    5223 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:41.809962    5223 out.go:177] * Starting "flannel-601000" primary control-plane node in "flannel-601000" cluster
	I0719 12:08:41.813988    5223 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:41.814004    5223 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:41.814017    5223 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:41.814077    5223 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:41.814084    5223 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:41.814155    5223 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/flannel-601000/config.json ...
	I0719 12:08:41.814169    5223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/flannel-601000/config.json: {Name:mka52ec001e0ab4063c9c9dba5ba4dd752b055a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:41.814387    5223 start.go:360] acquireMachinesLock for flannel-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:41.919246    5223 start.go:364] duration metric: took 104.850875ms to acquireMachinesLock for "flannel-601000"
	I0719 12:08:41.919273    5223 start.go:93] Provisioning new machine with config: &{Name:flannel-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:41.919393    5223 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:41.926693    5223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:41.960787    5223 start.go:159] libmachine.API.Create for "flannel-601000" (driver="qemu2")
	I0719 12:08:41.960823    5223 client.go:168] LocalClient.Create starting
	I0719 12:08:41.960902    5223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:41.960948    5223 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:41.960959    5223 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:41.961014    5223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:41.961048    5223 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:41.961062    5223 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:41.961943    5223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:42.106682    5223 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:42.192688    5223 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:42.192697    5223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:42.192896    5223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2
	I0719 12:08:42.202025    5223 main.go:141] libmachine: STDOUT: 
	I0719 12:08:42.202042    5223 main.go:141] libmachine: STDERR: 
	I0719 12:08:42.202090    5223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2 +20000M
	I0719 12:08:42.209871    5223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:42.209884    5223 main.go:141] libmachine: STDERR: 
	I0719 12:08:42.209905    5223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2
	I0719 12:08:42.209912    5223 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:42.209924    5223 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:42.209950    5223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:97:9c:d2:f1:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2
	I0719 12:08:42.211569    5223 main.go:141] libmachine: STDOUT: 
	I0719 12:08:42.211592    5223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:42.211609    5223 client.go:171] duration metric: took 250.784916ms to LocalClient.Create
	I0719 12:08:44.213766    5223 start.go:128] duration metric: took 2.294381167s to createHost
	I0719 12:08:44.213821    5223 start.go:83] releasing machines lock for "flannel-601000", held for 2.294592334s
	W0719 12:08:44.213894    5223 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:44.230273    5223 out.go:177] * Deleting "flannel-601000" in qemu2 ...
	W0719 12:08:44.257409    5223 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:44.257439    5223 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:49.259595    5223 start.go:360] acquireMachinesLock for flannel-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:49.312890    5223 start.go:364] duration metric: took 53.203417ms to acquireMachinesLock for "flannel-601000"
	I0719 12:08:49.313036    5223 start.go:93] Provisioning new machine with config: &{Name:flannel-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:49.313298    5223 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:49.322555    5223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:49.372418    5223 start.go:159] libmachine.API.Create for "flannel-601000" (driver="qemu2")
	I0719 12:08:49.372480    5223 client.go:168] LocalClient.Create starting
	I0719 12:08:49.372601    5223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:49.372653    5223 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:49.372675    5223 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:49.372742    5223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:49.372773    5223 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:49.372792    5223 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:49.373280    5223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:49.521870    5223 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:49.617897    5223 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:49.617912    5223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:49.618116    5223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2
	I0719 12:08:49.627868    5223 main.go:141] libmachine: STDOUT: 
	I0719 12:08:49.627890    5223 main.go:141] libmachine: STDERR: 
	I0719 12:08:49.627953    5223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2 +20000M
	I0719 12:08:49.636573    5223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:49.636595    5223 main.go:141] libmachine: STDERR: 
	I0719 12:08:49.636615    5223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2
	I0719 12:08:49.636619    5223 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:49.636631    5223 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:49.636659    5223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:61:c5:09:2e:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/flannel-601000/disk.qcow2
	I0719 12:08:49.638633    5223 main.go:141] libmachine: STDOUT: 
	I0719 12:08:49.638653    5223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:49.638666    5223 client.go:171] duration metric: took 266.174875ms to LocalClient.Create
	I0719 12:08:51.640706    5223 start.go:128] duration metric: took 2.327423708s to createHost
	I0719 12:08:51.640721    5223 start.go:83] releasing machines lock for "flannel-601000", held for 2.327842792s
	W0719 12:08:51.640815    5223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:51.649365    5223 out.go:177] 
	W0719 12:08:51.653363    5223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:08:51.653368    5223 out.go:239] * 
	* 
	W0719 12:08:51.653830    5223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:08:51.664390    5223 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.98s)
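
Every failure in this group is the same pre-Kubernetes fault: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the QEMU VM is never launched. A minimal standalone probe, sketched here purely as an illustration (it is not part of minikube or this test suite; only the socket path is taken from the command lines above), can distinguish a missing socket file from a daemon that is not accepting connections:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the logs above
		if _, err := os.Stat(sock); err != nil {
			// Socket file absent: the daemon was never started, or uses another path.
			fmt.Printf("socket file problem: %v\n", err)
			return
		}
		// "connection refused" here reproduces the exact error in the logs:
		// the socket file exists but no socket_vmnet daemon is listening on it.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Run the probe as the same user as the tests: a permission error points at socket ACLs rather than a dead daemon, while "connection refused" means the daemon process itself is down on the CI host. Restarting it (the mechanism depends on how socket_vmnet was installed; the /opt/socket_vmnet prefix above suggests a standalone install) should clear every Start failure in this group at once.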

TestNetworkPlugins/group/enable-default-cni/Start (9.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.866319084s)

-- stdout --
	* [enable-default-cni-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-601000" primary control-plane node in "enable-default-cni-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:51.604013    5347 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:51.604141    5347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:51.604145    5347 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:51.604150    5347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:51.604282    5347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:51.605348    5347 out.go:298] Setting JSON to false
	I0719 12:08:51.621350    5347 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4094,"bootTime":1721412037,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:51.621432    5347 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:51.627420    5347 out.go:177] * [enable-default-cni-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:51.634403    5347 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:51.634453    5347 notify.go:220] Checking for updates...
	I0719 12:08:51.642279    5347 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:51.653355    5347 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:51.664373    5347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:51.676351    5347 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:51.683350    5347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:51.687742    5347 config.go:182] Loaded profile config "flannel-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:51.687817    5347 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:51.687869    5347 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:51.692335    5347 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:51.699352    5347 start.go:297] selected driver: qemu2
	I0719 12:08:51.699358    5347 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:51.699368    5347 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:51.701811    5347 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:51.706373    5347 out.go:177] * Automatically selected the socket_vmnet network
	E0719 12:08:51.710406    5347 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0719 12:08:51.710426    5347 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:51.710446    5347 cni.go:84] Creating CNI manager for "bridge"
	I0719 12:08:51.710450    5347 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:08:51.710496    5347 start.go:340] cluster config:
	{Name:enable-default-cni-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:51.714163    5347 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:51.718339    5347 out.go:177] * Starting "enable-default-cni-601000" primary control-plane node in "enable-default-cni-601000" cluster
	I0719 12:08:51.726330    5347 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:51.726387    5347 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:51.726419    5347 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:51.726510    5347 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:51.726518    5347 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:51.726591    5347 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/enable-default-cni-601000/config.json ...
	I0719 12:08:51.726604    5347 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/enable-default-cni-601000/config.json: {Name:mkc92f98f274e2b866b3d28d795ee08a81645ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:51.726872    5347 start.go:360] acquireMachinesLock for enable-default-cni-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:51.726912    5347 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "enable-default-cni-601000"
	I0719 12:08:51.726923    5347 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:51.726958    5347 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:51.735321    5347 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:51.751356    5347 start.go:159] libmachine.API.Create for "enable-default-cni-601000" (driver="qemu2")
	I0719 12:08:51.751385    5347 client.go:168] LocalClient.Create starting
	I0719 12:08:51.751468    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:51.751499    5347 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:51.751507    5347 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:51.751542    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:51.751564    5347 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:51.751572    5347 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:51.752030    5347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:51.889719    5347 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:52.014927    5347 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:52.014938    5347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:52.015483    5347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2
	I0719 12:08:52.025456    5347 main.go:141] libmachine: STDOUT: 
	I0719 12:08:52.025476    5347 main.go:141] libmachine: STDERR: 
	I0719 12:08:52.025546    5347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2 +20000M
	I0719 12:08:52.034310    5347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:52.034336    5347 main.go:141] libmachine: STDERR: 
	I0719 12:08:52.034357    5347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2
	I0719 12:08:52.034365    5347 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:52.034379    5347 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:52.034411    5347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:13:3d:d3:3f:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2
	I0719 12:08:52.036185    5347 main.go:141] libmachine: STDOUT: 
	I0719 12:08:52.036200    5347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:52.036218    5347 client.go:171] duration metric: took 284.833333ms to LocalClient.Create
	I0719 12:08:54.038264    5347 start.go:128] duration metric: took 2.311331208s to createHost
	I0719 12:08:54.038317    5347 start.go:83] releasing machines lock for "enable-default-cni-601000", held for 2.311415167s
	W0719 12:08:54.038339    5347 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:54.050392    5347 out.go:177] * Deleting "enable-default-cni-601000" in qemu2 ...
	W0719 12:08:54.063817    5347 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:54.063826    5347 start.go:729] Will try again in 5 seconds ...
	I0719 12:08:59.065953    5347 start.go:360] acquireMachinesLock for enable-default-cni-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:59.066433    5347 start.go:364] duration metric: took 382.25µs to acquireMachinesLock for "enable-default-cni-601000"
	I0719 12:08:59.066663    5347 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:59.066913    5347 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:59.081195    5347 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:59.130367    5347 start.go:159] libmachine.API.Create for "enable-default-cni-601000" (driver="qemu2")
	I0719 12:08:59.130411    5347 client.go:168] LocalClient.Create starting
	I0719 12:08:59.130511    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:59.130572    5347 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:59.130586    5347 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:59.130649    5347 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:59.130693    5347 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:59.130703    5347 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:59.131522    5347 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:59.279331    5347 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:59.378752    5347 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:59.378757    5347 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:59.378928    5347 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2
	I0719 12:08:59.388258    5347 main.go:141] libmachine: STDOUT: 
	I0719 12:08:59.388280    5347 main.go:141] libmachine: STDERR: 
	I0719 12:08:59.388335    5347 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2 +20000M
	I0719 12:08:59.396171    5347 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:59.396186    5347 main.go:141] libmachine: STDERR: 
	I0719 12:08:59.396197    5347 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2
	I0719 12:08:59.396208    5347 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:59.396224    5347 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:59.396251    5347 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ad:ec:32:72:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/enable-default-cni-601000/disk.qcow2
	I0719 12:08:59.397893    5347 main.go:141] libmachine: STDOUT: 
	I0719 12:08:59.397909    5347 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:59.397921    5347 client.go:171] duration metric: took 267.508541ms to LocalClient.Create
	I0719 12:09:01.400081    5347 start.go:128] duration metric: took 2.333171709s to createHost
	I0719 12:09:01.400125    5347 start.go:83] releasing machines lock for "enable-default-cni-601000", held for 2.333639833s
	W0719 12:09:01.400539    5347 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:01.410224    5347 out.go:177] 
	W0719 12:09:01.416365    5347 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:01.416390    5347 out.go:239] * 
	* 
	W0719 12:09:01.418901    5347 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:01.428211    5347 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.87s)
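
Note the E0719 12:08:51.710406 start_flags.go line in the stderr above: minikube treats --enable-default-cni as deprecated and rewrites it to --cni=bridge before provisioning, which is why the saved cluster config records EnableDefaultCNI:false CNI:bridge. This run therefore exercises the same bridge CNI path as TestNetworkPlugins/group/bridge/Start below; passing --cni=bridge directly would be the equivalent invocation.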

TestNetworkPlugins/group/bridge/Start (10.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.139897125s)

-- stdout --
	* [bridge-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-601000" primary control-plane node in "bridge-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:08:53.978851    5460 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:08:53.978974    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:53.978977    5460 out.go:304] Setting ErrFile to fd 2...
	I0719 12:08:53.978979    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:08:53.979101    5460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:08:53.980124    5460 out.go:298] Setting JSON to false
	I0719 12:08:53.996306    5460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4096,"bootTime":1721412037,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:08:53.996367    5460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:08:54.001557    5460 out.go:177] * [bridge-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:08:54.008432    5460 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:08:54.008482    5460 notify.go:220] Checking for updates...
	I0719 12:08:54.015465    5460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:08:54.018494    5460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:08:54.021432    5460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:08:54.024610    5460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:08:54.027422    5460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:08:54.030748    5460 config.go:182] Loaded profile config "enable-default-cni-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:54.030819    5460 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:08:54.030862    5460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:08:54.035474    5460 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:08:54.042429    5460 start.go:297] selected driver: qemu2
	I0719 12:08:54.042443    5460 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:08:54.042451    5460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:08:54.044960    5460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:08:54.054465    5460 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:08:54.061521    5460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:08:54.061541    5460 cni.go:84] Creating CNI manager for "bridge"
	I0719 12:08:54.061547    5460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:08:54.061589    5460 start.go:340] cluster config:
	{Name:bridge-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:08:54.065648    5460 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:08:54.079449    5460 out.go:177] * Starting "bridge-601000" primary control-plane node in "bridge-601000" cluster
	I0719 12:08:54.086902    5460 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:08:54.086920    5460 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:08:54.086936    5460 cache.go:56] Caching tarball of preloaded images
	I0719 12:08:54.087025    5460 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:08:54.087031    5460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:08:54.087098    5460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/bridge-601000/config.json ...
	I0719 12:08:54.087111    5460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/bridge-601000/config.json: {Name:mkbc05990c854432ad16a4b872e7e89509fd5301 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:08:54.087352    5460 start.go:360] acquireMachinesLock for bridge-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:08:54.087391    5460 start.go:364] duration metric: took 32.166µs to acquireMachinesLock for "bridge-601000"
	I0719 12:08:54.087403    5460 start.go:93] Provisioning new machine with config: &{Name:bridge-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:08:54.087440    5460 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:08:54.091530    5460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:08:54.110665    5460 start.go:159] libmachine.API.Create for "bridge-601000" (driver="qemu2")
	I0719 12:08:54.110694    5460 client.go:168] LocalClient.Create starting
	I0719 12:08:54.110760    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:08:54.110793    5460 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:54.110803    5460 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:54.110842    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:08:54.110868    5460 main.go:141] libmachine: Decoding PEM data...
	I0719 12:08:54.110879    5460 main.go:141] libmachine: Parsing certificate...
	I0719 12:08:54.111283    5460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:08:54.248550    5460 main.go:141] libmachine: Creating SSH key...
	I0719 12:08:54.605129    5460 main.go:141] libmachine: Creating Disk image...
	I0719 12:08:54.605142    5460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:08:54.605397    5460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2
	I0719 12:08:54.615451    5460 main.go:141] libmachine: STDOUT: 
	I0719 12:08:54.615471    5460 main.go:141] libmachine: STDERR: 
	I0719 12:08:54.615530    5460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2 +20000M
	I0719 12:08:54.623440    5460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:08:54.623454    5460 main.go:141] libmachine: STDERR: 
	I0719 12:08:54.623466    5460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2
	I0719 12:08:54.623470    5460 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:08:54.623485    5460 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:08:54.623508    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:31:7a:1f:1d:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2
	I0719 12:08:54.625169    5460 main.go:141] libmachine: STDOUT: 
	I0719 12:08:54.625185    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:08:54.625212    5460 client.go:171] duration metric: took 514.521792ms to LocalClient.Create
	I0719 12:08:56.627375    5460 start.go:128] duration metric: took 2.539942833s to createHost
	I0719 12:08:56.627435    5460 start.go:83] releasing machines lock for "bridge-601000", held for 2.5400685s
	W0719 12:08:56.627492    5460 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:56.643563    5460 out.go:177] * Deleting "bridge-601000" in qemu2 ...
	W0719 12:08:56.669988    5460 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:08:56.670015    5460 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:01.672013    5460 start.go:360] acquireMachinesLock for bridge-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:01.672111    5460 start.go:364] duration metric: took 67.958µs to acquireMachinesLock for "bridge-601000"
	I0719 12:09:01.672145    5460 start.go:93] Provisioning new machine with config: &{Name:bridge-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:01.672179    5460 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:01.676389    5460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:09:01.692237    5460 start.go:159] libmachine.API.Create for "bridge-601000" (driver="qemu2")
	I0719 12:09:01.692266    5460 client.go:168] LocalClient.Create starting
	I0719 12:09:01.692333    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:01.692365    5460 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:01.692373    5460 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:01.692408    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:01.692424    5460 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:01.692428    5460 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:01.692783    5460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:01.866375    5460 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:02.029872    5460 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:02.029884    5460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:02.032613    5460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2
	I0719 12:09:02.042126    5460 main.go:141] libmachine: STDOUT: 
	I0719 12:09:02.042151    5460 main.go:141] libmachine: STDERR: 
	I0719 12:09:02.042214    5460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2 +20000M
	I0719 12:09:02.051262    5460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:02.051279    5460 main.go:141] libmachine: STDERR: 
	I0719 12:09:02.051294    5460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2
	I0719 12:09:02.051299    5460 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:02.051312    5460 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:02.051349    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8f:0d:e5:4d:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/bridge-601000/disk.qcow2
	I0719 12:09:02.053257    5460 main.go:141] libmachine: STDOUT: 
	I0719 12:09:02.053277    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:02.053290    5460 client.go:171] duration metric: took 361.026125ms to LocalClient.Create
	I0719 12:09:04.055571    5460 start.go:128] duration metric: took 2.38334975s to createHost
	I0719 12:09:04.055679    5460 start.go:83] releasing machines lock for "bridge-601000", held for 2.383588709s
	W0719 12:09:04.056008    5460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:04.068559    5460 out.go:177] 
	W0719 12:09:04.071548    5460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:04.071581    5460 out.go:239] * 
	* 
	W0719 12:09:04.074121    5460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:04.081396    5460 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.14s)
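
As with the flannel and enable-default-cni runs above, the failure happens inside LocalClient.Create, before any Kubernetes component is launched, so the bridge CNI itself is never exercised. The roughly ten-second duration is fully accounted for by two createHost attempts of about 2.3-2.5s each plus the driver's fixed five-second retry ("Will try again in 5 seconds ..."), which is why every Start test in this group lands near the ten-second mark.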

TestNetworkPlugins/group/kubenet/Start (10.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-601000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.196387708s)

-- stdout --
	* [kubenet-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-601000" primary control-plane node in "kubenet-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:03.621826    5575 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:03.621956    5575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:03.621960    5575 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:03.621962    5575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:03.622091    5575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:03.623158    5575 out.go:298] Setting JSON to false
	I0719 12:09:03.639258    5575 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4106,"bootTime":1721412037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:03.639322    5575 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:03.645542    5575 out.go:177] * [kubenet-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:03.652668    5575 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:03.652708    5575 notify.go:220] Checking for updates...
	I0719 12:09:03.659619    5575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:03.662653    5575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:03.664167    5575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:03.667632    5575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:03.670692    5575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:03.674050    5575 config.go:182] Loaded profile config "bridge-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:03.674126    5575 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:03.674175    5575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:03.678585    5575 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:09:03.685661    5575 start.go:297] selected driver: qemu2
	I0719 12:09:03.685670    5575 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:09:03.685678    5575 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:03.688109    5575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:09:03.691613    5575 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:09:03.694795    5575 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:03.694822    5575 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0719 12:09:03.694853    5575 start.go:340] cluster config:
	{Name:kubenet-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:03.698692    5575 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:03.706630    5575 out.go:177] * Starting "kubenet-601000" primary control-plane node in "kubenet-601000" cluster
	I0719 12:09:03.710639    5575 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:09:03.710654    5575 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:09:03.710663    5575 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:03.710722    5575 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:03.710728    5575 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:09:03.710791    5575 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kubenet-601000/config.json ...
	I0719 12:09:03.710804    5575 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/kubenet-601000/config.json: {Name:mk8d232f9ce2374d6ecbb4e0b5a620685d1bd8ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:09:03.711021    5575 start.go:360] acquireMachinesLock for kubenet-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:04.055851    5575 start.go:364] duration metric: took 344.809792ms to acquireMachinesLock for "kubenet-601000"
	I0719 12:09:04.055958    5575 start.go:93] Provisioning new machine with config: &{Name:kubenet-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:04.056161    5575 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:04.064472    5575 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:09:04.113356    5575 start.go:159] libmachine.API.Create for "kubenet-601000" (driver="qemu2")
	I0719 12:09:04.113412    5575 client.go:168] LocalClient.Create starting
	I0719 12:09:04.113539    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:04.113600    5575 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:04.113617    5575 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:04.113684    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:04.113724    5575 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:04.113741    5575 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:04.114284    5575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:04.259012    5575 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:04.316967    5575 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:04.316978    5575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:04.317174    5575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2
	I0719 12:09:04.327174    5575 main.go:141] libmachine: STDOUT: 
	I0719 12:09:04.327197    5575 main.go:141] libmachine: STDERR: 
	I0719 12:09:04.327262    5575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2 +20000M
	I0719 12:09:04.336235    5575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:04.336256    5575 main.go:141] libmachine: STDERR: 
	I0719 12:09:04.336275    5575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2
	I0719 12:09:04.336287    5575 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:04.336311    5575 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:04.336342    5575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:fb:7c:cc:c7:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2
	I0719 12:09:04.338226    5575 main.go:141] libmachine: STDOUT: 
	I0719 12:09:04.338243    5575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:04.338265    5575 client.go:171] duration metric: took 224.849291ms to LocalClient.Create
	I0719 12:09:06.340367    5575 start.go:128] duration metric: took 2.284223667s to createHost
	I0719 12:09:06.340383    5575 start.go:83] releasing machines lock for "kubenet-601000", held for 2.28453825s
	W0719 12:09:06.340405    5575 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:06.354235    5575 out.go:177] * Deleting "kubenet-601000" in qemu2 ...
	W0719 12:09:06.363693    5575 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:06.363707    5575 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:11.365811    5575 start.go:360] acquireMachinesLock for kubenet-601000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:11.366212    5575 start.go:364] duration metric: took 320.375µs to acquireMachinesLock for "kubenet-601000"
	I0719 12:09:11.366321    5575 start.go:93] Provisioning new machine with config: &{Name:kubenet-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:11.366614    5575 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:11.381134    5575 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 12:09:11.434190    5575 start.go:159] libmachine.API.Create for "kubenet-601000" (driver="qemu2")
	I0719 12:09:11.434240    5575 client.go:168] LocalClient.Create starting
	I0719 12:09:11.434346    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:11.434406    5575 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:11.434419    5575 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:11.434482    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:11.434526    5575 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:11.434536    5575 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:11.435055    5575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:11.587102    5575 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:11.723930    5575 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:11.723936    5575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:11.724124    5575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2
	I0719 12:09:11.733469    5575 main.go:141] libmachine: STDOUT: 
	I0719 12:09:11.733489    5575 main.go:141] libmachine: STDERR: 
	I0719 12:09:11.733541    5575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2 +20000M
	I0719 12:09:11.741403    5575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:11.741418    5575 main.go:141] libmachine: STDERR: 
	I0719 12:09:11.741430    5575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2
	I0719 12:09:11.741433    5575 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:11.741448    5575 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:11.741481    5575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:91:a4:f0:94:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/kubenet-601000/disk.qcow2
	I0719 12:09:11.743142    5575 main.go:141] libmachine: STDOUT: 
	I0719 12:09:11.743159    5575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:11.743172    5575 client.go:171] duration metric: took 308.929584ms to LocalClient.Create
	I0719 12:09:13.745281    5575 start.go:128] duration metric: took 2.378664583s to createHost
	I0719 12:09:13.745334    5575 start.go:83] releasing machines lock for "kubenet-601000", held for 2.379132375s
	W0719 12:09:13.745724    5575 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:13.755244    5575 out.go:177] 
	W0719 12:09:13.768386    5575 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:13.768423    5575 out.go:239] * 
	* 
	W0719 12:09:13.771061    5575 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:13.779226    5575 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.20s)
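All of the network-plugin start failures above share one signature: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never handed the file descriptor for its guest NIC (the -netdev socket,id=net0,fd=3 argument in the launch command), and minikube aborts after a single delete-and-retry cycle. The host-side checks below are a minimal triage sketch, assuming the /opt/socket_vmnet install layout shown in the logs; the launchd job name is an assumption and may differ per install.

	# Does the daemon's Unix socket exist, and is the process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If socket_vmnet runs as a launchd service (job name assumed):
	sudo launchctl list | grep -i socket_vmnet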

TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.89958875s)

-- stdout --
	* [old-k8s-version-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-120000" primary control-plane node in "old-k8s-version-120000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-120000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:06.240880    5680 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:06.241019    5680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:06.241022    5680 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:06.241025    5680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:06.241154    5680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:06.242170    5680 out.go:298] Setting JSON to false
	I0719 12:09:06.258303    5680 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4109,"bootTime":1721412037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:06.258377    5680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:06.263496    5680 out.go:177] * [old-k8s-version-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:06.270304    5680 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:06.270339    5680 notify.go:220] Checking for updates...
	I0719 12:09:06.277275    5680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:06.280300    5680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:06.283319    5680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:06.286257    5680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:06.289276    5680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:06.292682    5680 config.go:182] Loaded profile config "kubenet-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:06.292748    5680 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:06.292796    5680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:06.297226    5680 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:09:06.304315    5680 start.go:297] selected driver: qemu2
	I0719 12:09:06.304325    5680 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:09:06.304332    5680 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:06.306574    5680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:09:06.309268    5680 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:09:06.312376    5680 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:06.312420    5680 cni.go:84] Creating CNI manager for ""
	I0719 12:09:06.312428    5680 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 12:09:06.312460    5680 start.go:340] cluster config:
	{Name:old-k8s-version-120000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:06.316226    5680 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:06.322277    5680 out.go:177] * Starting "old-k8s-version-120000" primary control-plane node in "old-k8s-version-120000" cluster
	I0719 12:09:06.326334    5680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 12:09:06.326352    5680 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 12:09:06.326364    5680 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:06.326437    5680 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:06.326442    5680 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 12:09:06.326518    5680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/old-k8s-version-120000/config.json ...
	I0719 12:09:06.326530    5680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/old-k8s-version-120000/config.json: {Name:mke6e325f3b3f2507c705a35d9dede0178f2b086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:09:06.326873    5680 start.go:360] acquireMachinesLock for old-k8s-version-120000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:06.340413    5680 start.go:364] duration metric: took 13.531667ms to acquireMachinesLock for "old-k8s-version-120000"
	I0719 12:09:06.340432    5680 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:06.340473    5680 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:06.345307    5680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:06.364406    5680 start.go:159] libmachine.API.Create for "old-k8s-version-120000" (driver="qemu2")
	I0719 12:09:06.364452    5680 client.go:168] LocalClient.Create starting
	I0719 12:09:06.364528    5680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:06.364564    5680 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:06.364575    5680 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:06.364622    5680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:06.364647    5680 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:06.364654    5680 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:06.364996    5680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:06.511053    5680 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:06.681754    5680 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:06.681760    5680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:06.681944    5680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:06.691546    5680 main.go:141] libmachine: STDOUT: 
	I0719 12:09:06.691565    5680 main.go:141] libmachine: STDERR: 
	I0719 12:09:06.691623    5680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2 +20000M
	I0719 12:09:06.699461    5680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:06.699475    5680 main.go:141] libmachine: STDERR: 
	I0719 12:09:06.699493    5680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:06.699496    5680 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:06.699510    5680 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:06.699537    5680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:eb:d6:55:68:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:06.701162    5680 main.go:141] libmachine: STDOUT: 
	I0719 12:09:06.701178    5680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:06.701198    5680 client.go:171] duration metric: took 336.740667ms to LocalClient.Create
	I0719 12:09:08.703335    5680 start.go:128] duration metric: took 2.362874041s to createHost
	I0719 12:09:08.703381    5680 start.go:83] releasing machines lock for "old-k8s-version-120000", held for 2.362984709s
	W0719 12:09:08.703438    5680 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:08.720478    5680 out.go:177] * Deleting "old-k8s-version-120000" in qemu2 ...
	W0719 12:09:08.747483    5680 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:08.747524    5680 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:13.747694    5680 start.go:360] acquireMachinesLock for old-k8s-version-120000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:13.748127    5680 start.go:364] duration metric: took 359µs to acquireMachinesLock for "old-k8s-version-120000"
	I0719 12:09:13.748265    5680 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:13.748538    5680 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:13.765191    5680 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:13.814484    5680 start.go:159] libmachine.API.Create for "old-k8s-version-120000" (driver="qemu2")
	I0719 12:09:13.814525    5680 client.go:168] LocalClient.Create starting
	I0719 12:09:13.814631    5680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:13.814684    5680 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:13.814702    5680 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:13.814765    5680 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:13.814795    5680 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:13.814804    5680 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:13.815343    5680 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:13.965720    5680 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:14.059063    5680 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:14.059076    5680 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:14.059292    5680 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:14.068790    5680 main.go:141] libmachine: STDOUT: 
	I0719 12:09:14.068811    5680 main.go:141] libmachine: STDERR: 
	I0719 12:09:14.068869    5680 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2 +20000M
	I0719 12:09:14.077567    5680 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:14.077588    5680 main.go:141] libmachine: STDERR: 
	I0719 12:09:14.077603    5680 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:14.077608    5680 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:14.077618    5680 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:14.077658    5680 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:87:db:ba:96:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:14.079493    5680 main.go:141] libmachine: STDOUT: 
	I0719 12:09:14.079507    5680 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:14.079520    5680 client.go:171] duration metric: took 264.994834ms to LocalClient.Create
	I0719 12:09:16.081588    5680 start.go:128] duration metric: took 2.333065167s to createHost
	I0719 12:09:16.081609    5680 start.go:83] releasing machines lock for "old-k8s-version-120000", held for 2.333495125s
	W0719 12:09:16.081693    5680 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:16.088446    5680 out.go:177] 
	W0719 12:09:16.093465    5680 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:16.093476    5680 out.go:239] * 
	* 
	W0719 12:09:16.094034    5680 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:16.104385    5680 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (35.019917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.94s)
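The old-k8s-version block above and the no-preload block below show the same two-attempt sequence (create, "Connection refused", delete, retry after 5 seconds, GUEST_PROVISION exit 80), which points at the host environment rather than any particular Kubernetes version. As a sketch, the failure can be reproduced outside the test harness by invoking the same client binary the driver uses; the restart line applies only to a Homebrew-managed install and follows the pattern in minikube's qemu driver documentation.

	# Fails with the identical "Failed to connect" message while the daemon is down:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Restart a Homebrew-managed socket_vmnet service (assumes a Homebrew install):
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet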

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-371000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-371000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.85478275s)

-- stdout --
	* [no-preload-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-371000" primary control-plane node in "no-preload-371000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:15.948641    5795 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:15.948760    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:15.948763    5795 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:15.948765    5795 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:15.948897    5795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:15.949927    5795 out.go:298] Setting JSON to false
	I0719 12:09:15.966322    5795 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4118,"bootTime":1721412037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:15.966395    5795 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:15.972581    5795 out.go:177] * [no-preload-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:15.979469    5795 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:15.979506    5795 notify.go:220] Checking for updates...
	I0719 12:09:15.987439    5795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:15.991501    5795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:15.994462    5795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:15.997501    5795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:16.000541    5795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:16.003773    5795 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:16.003845    5795 config.go:182] Loaded profile config "old-k8s-version-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 12:09:16.003888    5795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:16.008419    5795 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:09:16.015514    5795 start.go:297] selected driver: qemu2
	I0719 12:09:16.015522    5795 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:09:16.015529    5795 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:16.017941    5795 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:09:16.020420    5795 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:09:16.023691    5795 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:16.023741    5795 cni.go:84] Creating CNI manager for ""
	I0719 12:09:16.023749    5795 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:16.023753    5795 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:09:16.023780    5795 start.go:340] cluster config:
	{Name:no-preload-371000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:16.027563    5795 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.033504    5795 out.go:177] * Starting "no-preload-371000" primary control-plane node in "no-preload-371000" cluster
	I0719 12:09:16.037487    5795 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 12:09:16.037590    5795 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/no-preload-371000/config.json ...
	I0719 12:09:16.037620    5795 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/no-preload-371000/config.json: {Name:mkf7d99c53ff8dcc0bb1e17bff9ecf3bddaa187c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:09:16.037616    5795 cache.go:107] acquiring lock: {Name:mkf3de4290b7ea2a2cf08483b15bdd55dce00d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037631    5795 cache.go:107] acquiring lock: {Name:mkf6bd33cdb36b07dd1b0fe94fd50d8f25b73886 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037640    5795 cache.go:107] acquiring lock: {Name:mk8f55e98f823c2b250c8b2565d3e8c5f0eea927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037691    5795 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 12:09:16.037699    5795 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.333µs
	I0719 12:09:16.037706    5795 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 12:09:16.037715    5795 cache.go:107] acquiring lock: {Name:mk06e2105666d8f631ecd16ab34fa68fea6df9b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037819    5795 cache.go:107] acquiring lock: {Name:mk7c79ce13ccceb676f563f09d5be4a19b01132c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037820    5795 cache.go:107] acquiring lock: {Name:mk8d79415151f82d57f407b8d548dedd858f822d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037826    5795 cache.go:107] acquiring lock: {Name:mkb7da12256218042c3c5e2b2bdfaf0724450fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037857    5795 cache.go:107] acquiring lock: {Name:mkc42c96a4965e2adc782ec31eee9629d116b544 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:16.037964    5795 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 12:09:16.037957    5795 start.go:360] acquireMachinesLock for no-preload-371000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:16.037966    5795 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 12:09:16.038013    5795 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 12:09:16.037992    5795 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 12:09:16.038064    5795 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 12:09:16.038205    5795 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 12:09:16.038213    5795 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 12:09:16.045871    5795 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 12:09:16.046029    5795 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 12:09:16.046482    5795 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 12:09:16.046592    5795 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 12:09:16.047518    5795 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 12:09:16.047555    5795 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 12:09:16.047652    5795 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 12:09:16.081678    5795 start.go:364] duration metric: took 43.690666ms to acquireMachinesLock for "no-preload-371000"
	I0719 12:09:16.081733    5795 start.go:93] Provisioning new machine with config: &{Name:no-preload-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:16.081791    5795 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:16.088485    5795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:16.103917    5795 start.go:159] libmachine.API.Create for "no-preload-371000" (driver="qemu2")
	I0719 12:09:16.103942    5795 client.go:168] LocalClient.Create starting
	I0719 12:09:16.104006    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:16.104036    5795 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:16.104045    5795 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:16.104079    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:16.104104    5795 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:16.104112    5795 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:16.108885    5795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:16.262773    5795 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:16.319971    5795 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:16.319990    5795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:16.320187    5795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:16.330181    5795 main.go:141] libmachine: STDOUT: 
	I0719 12:09:16.330200    5795 main.go:141] libmachine: STDERR: 
	I0719 12:09:16.330275    5795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2 +20000M
	I0719 12:09:16.339737    5795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:16.339755    5795 main.go:141] libmachine: STDERR: 
	I0719 12:09:16.339782    5795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:16.339788    5795 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:16.339803    5795 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:16.339831    5795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:c9:c2:b8:1a:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:16.341898    5795 main.go:141] libmachine: STDOUT: 
	I0719 12:09:16.341926    5795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:16.341945    5795 client.go:171] duration metric: took 238.002041ms to LocalClient.Create
	I0719 12:09:16.432547    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 12:09:16.449895    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 12:09:16.461733    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0719 12:09:16.467957    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 12:09:16.497679    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 12:09:16.580605    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 12:09:16.583008    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0719 12:09:16.583023    5795 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 545.277291ms
	I0719 12:09:16.583044    5795 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0719 12:09:16.586745    5795 cache.go:162] opening:  /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0719 12:09:18.342180    5795 start.go:128] duration metric: took 2.260366208s to createHost
	I0719 12:09:18.342262    5795 start.go:83] releasing machines lock for "no-preload-371000", held for 2.260584s
	W0719 12:09:18.342313    5795 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:18.353246    5795 out.go:177] * Deleting "no-preload-371000" in qemu2 ...
	W0719 12:09:18.378610    5795 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:18.378651    5795 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:19.511877    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0719 12:09:19.511931    5795 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.474201584s
	I0719 12:09:19.511960    5795 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0719 12:09:19.548807    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0719 12:09:19.548856    5795 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.51107375s
	I0719 12:09:19.548880    5795 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0719 12:09:20.476015    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0719 12:09:20.476071    5795 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.438515125s
	I0719 12:09:20.476102    5795 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0719 12:09:20.598616    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0719 12:09:20.598656    5795 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.561101958s
	I0719 12:09:20.598712    5795 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0719 12:09:21.234421    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0719 12:09:21.234479    5795 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 5.196778208s
	I0719 12:09:21.234506    5795 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0719 12:09:23.379252    5795 start.go:360] acquireMachinesLock for no-preload-371000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:23.379634    5795 start.go:364] duration metric: took 306.166µs to acquireMachinesLock for "no-preload-371000"
	I0719 12:09:23.379750    5795 start.go:93] Provisioning new machine with config: &{Name:no-preload-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:23.380047    5795 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:23.389664    5795 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:23.439404    5795 start.go:159] libmachine.API.Create for "no-preload-371000" (driver="qemu2")
	I0719 12:09:23.439445    5795 client.go:168] LocalClient.Create starting
	I0719 12:09:23.439580    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:23.439641    5795 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:23.439662    5795 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:23.439736    5795 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:23.439786    5795 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:23.439802    5795 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:23.440286    5795 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:23.605051    5795 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:23.685428    5795 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:23.685434    5795 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:23.685608    5795 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:23.695201    5795 main.go:141] libmachine: STDOUT: 
	I0719 12:09:23.695230    5795 main.go:141] libmachine: STDERR: 
	I0719 12:09:23.695284    5795 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2 +20000M
	I0719 12:09:23.703314    5795 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:23.703329    5795 main.go:141] libmachine: STDERR: 
	I0719 12:09:23.703343    5795 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:23.703347    5795 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:23.703357    5795 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:23.703390    5795 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d6:8c:e1:a8:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:23.705060    5795 main.go:141] libmachine: STDOUT: 
	I0719 12:09:23.705076    5795 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:23.705089    5795 client.go:171] duration metric: took 265.643167ms to LocalClient.Create
	I0719 12:09:23.801813    5795 cache.go:157] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0719 12:09:23.801829    5795 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.76421975s
	I0719 12:09:23.801837    5795 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0719 12:09:23.801853    5795 cache.go:87] Successfully saved all images to host disk.
	I0719 12:09:25.707315    5795 start.go:128] duration metric: took 2.327213666s to createHost
	I0719 12:09:25.707406    5795 start.go:83] releasing machines lock for "no-preload-371000", held for 2.32778s
	W0719 12:09:25.707819    5795 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:25.724356    5795 out.go:177] 
	W0719 12:09:25.733584    5795 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:25.733614    5795 out.go:239] * 
	* 
	W0719 12:09:25.736173    5795 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:25.748356    5795 out.go:177] 

** /stderr **
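
Everything host-side up to the network hookup succeeds in the stderr block above: the boot2docker.iso is copied from the ISO cache, the SSH key is written, and the disk image is built the same way on both attempts. The two qemu-img steps can be reproduced standalone (a sketch; the file names stand in for the per-profile paths in the log):

	# convert the raw scratch image to qcow2, then grow it to the requested 20000 MB
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M

Only the final socket_vmnet_client launch of qemu-system-aarch64 fails, which is why both attempts log "Image resized." on STDOUT with an empty STDERR before the Connection refused error.
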
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-371000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (63.552292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
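
Every qemu2 start in this group fails at the same point: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never launched; the create is retried once after five seconds with the same result. A minimal host-side check, assuming the manual /opt/socket_vmnet install implied by the paths above (the agent's actual service setup is not shown in the log):

	# is the daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# exercise the socket the way minikube does; "true" is just a no-op payload
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the socket is missing, restarting the socket_vmnet daemon (its launchd plist for a manual install, or `sudo brew services start socket_vmnet` for a Homebrew one) should clear the repeated "Connection refused" failures in the rest of this run.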

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-120000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-120000 create -f testdata/busybox.yaml: exit status 1 (31.770334ms)

** stderr ** 
	error: context "old-k8s-version-120000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-120000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (34.450625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (35.721042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
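
The kubectl failures in this group are downstream of the start failure: the VM never booted, so minikube never wrote an "old-k8s-version-120000" context into the test kubeconfig, and every `kubectl --context` call fails before reaching a cluster. A quick confirmation against the kubeconfig this run uses (path taken from the start output below):

	# list the contexts the test kubeconfig actually contains
	KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig kubectl config get-contexts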

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-120000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-120000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-120000 describe deploy/metrics-server -n kube-system: exit status 1 (30.047417ms)

** stderr ** 
	error: context "old-k8s-version-120000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-120000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (30.420875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (7.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (7.263244125s)

-- stdout --
	* [old-k8s-version-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-120000" primary control-plane node in "old-k8s-version-120000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:18.552496    5861 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:18.552622    5861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:18.552625    5861 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:18.552628    5861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:18.552758    5861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:18.553820    5861 out.go:298] Setting JSON to false
	I0719 12:09:18.570154    5861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4121,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:18.570222    5861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:18.575146    5861 out.go:177] * [old-k8s-version-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:18.582146    5861 notify.go:220] Checking for updates...
	I0719 12:09:18.586148    5861 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:18.594092    5861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:18.601141    5861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:18.609104    5861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:18.617107    5861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:18.624108    5861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:18.628430    5861 config.go:182] Loaded profile config "old-k8s-version-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 12:09:18.632144    5861 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 12:09:18.635104    5861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:18.638093    5861 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 12:09:18.646088    5861 start.go:297] selected driver: qemu2
	I0719 12:09:18.646093    5861 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:18.646147    5861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:18.648583    5861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:18.648613    5861 cni.go:84] Creating CNI manager for ""
	I0719 12:09:18.648620    5861 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 12:09:18.648653    5861 start.go:340] cluster config:
	{Name:old-k8s-version-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:18.652355    5861 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:18.659074    5861 out.go:177] * Starting "old-k8s-version-120000" primary control-plane node in "old-k8s-version-120000" cluster
	I0719 12:09:18.663114    5861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 12:09:18.663132    5861 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 12:09:18.663139    5861 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:18.663200    5861 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:18.663206    5861 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 12:09:18.663280    5861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/old-k8s-version-120000/config.json ...
	I0719 12:09:18.663670    5861 start.go:360] acquireMachinesLock for old-k8s-version-120000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:18.663711    5861 start.go:364] duration metric: took 34.375µs to acquireMachinesLock for "old-k8s-version-120000"
	I0719 12:09:18.663719    5861 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:18.663725    5861 fix.go:54] fixHost starting: 
	I0719 12:09:18.663846    5861 fix.go:112] recreateIfNeeded on old-k8s-version-120000: state=Stopped err=<nil>
	W0719 12:09:18.663854    5861 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:18.668094    5861 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-120000" ...
	I0719 12:09:18.675984    5861 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:18.676030    5861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:87:db:ba:96:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:18.677886    5861 main.go:141] libmachine: STDOUT: 
	I0719 12:09:18.677906    5861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:18.677932    5861 fix.go:56] duration metric: took 14.208083ms for fixHost
	I0719 12:09:18.677935    5861 start.go:83] releasing machines lock for "old-k8s-version-120000", held for 14.22ms
	W0719 12:09:18.677942    5861 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:18.677978    5861 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:18.677983    5861 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:23.679995    5861 start.go:360] acquireMachinesLock for old-k8s-version-120000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:25.707589    5861 start.go:364] duration metric: took 2.027585417s to acquireMachinesLock for "old-k8s-version-120000"
	I0719 12:09:25.707777    5861 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:25.707803    5861 fix.go:54] fixHost starting: 
	I0719 12:09:25.708576    5861 fix.go:112] recreateIfNeeded on old-k8s-version-120000: state=Stopped err=<nil>
	W0719 12:09:25.708605    5861 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:25.729363    5861 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-120000" ...
	I0719 12:09:25.736336    5861 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:25.736543    5861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:87:db:ba:96:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/old-k8s-version-120000/disk.qcow2
	I0719 12:09:25.745826    5861 main.go:141] libmachine: STDOUT: 
	I0719 12:09:25.745891    5861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:25.745974    5861 fix.go:56] duration metric: took 38.173833ms for fixHost
	I0719 12:09:25.745990    5861 start.go:83] releasing machines lock for "old-k8s-version-120000", held for 38.343833ms
	W0719 12:09:25.746172    5861 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:25.760318    5861 out.go:177] 
	W0719 12:09:25.763383    5861 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:25.763455    5861 out.go:239] * 
	* 
	W0719 12:09:25.766011    5861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:25.775326    5861 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (53.400709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (7.32s)
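
SecondStart exercises the fixHost path (restarting the existing stopped VM) rather than a fresh create, and it hits the same socket error, so both code paths are blocked by the unreachable host daemon rather than by anything in the profile itself. Once socket_vmnet is reachable again, the recovery the log itself suggests would be to discard the profile and rerun the start, e.g. with a subset of the test's flags:

	# delete the broken profile, then retry the same start
	out/minikube-darwin-arm64 delete -p old-k8s-version-120000
	out/minikube-darwin-arm64 start -p old-k8s-version-120000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0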

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-371000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-371000 create -f testdata/busybox.yaml: exit status 1 (30.601959ms)

** stderr ** 
	error: context "no-preload-371000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-371000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (30.84075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (31.542375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-120000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (33.259125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-120000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.084625ms)

** stderr ** 
	error: context "old-k8s-version-120000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (29.607333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
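
The assertion at start_stop_delete_test.go:297 greps the describe output for " registry.k8s.io/echoserver:1.4", and that output is empty here because the context is gone. A hedged Go sketch of querying the deployment's images more directly (hypothetical check; the jsonpath expression is standard kubectl, but this is not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask only for the container images of the dashboard-metrics-scraper deployment.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-120000",
		"-n", "kubernetes-dashboard", "get", "deploy/dashboard-metrics-scraper",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		fmt.Println("query failed (context missing, as in this run):", err)
		return
	}
	fmt.Println(strings.Contains(string(out), "registry.k8s.io/echoserver:1.4"))
}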

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-371000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-371000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-371000 describe deploy/metrics-server -n kube-system: exit status 1 (28.705333ms)

** stderr ** 
	error: context "no-preload-371000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-371000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (35.683833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-120000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (29.123ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
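
Every wanted image carries a leading "-" because `image list` returned nothing against the stopped host; the "(-want +got)" header is the go-cmp diff convention. A standalone Go sketch of the same want-versus-got set comparison (illustrative only, simpler than the test's actual diff):

package main

import "fmt"

// missingImages returns the entries of want that do not occur in got.
func missingImages(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	want := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"}
	var got []string // empty: the host never came up, so no images were listed
	fmt.Println(missingImages(want, got)) // every wanted image is reported missing
}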

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-120000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-120000 --alsologtostderr -v=1: exit status 83 (47.323667ms)

-- stdout --
	* The control-plane node old-k8s-version-120000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-120000"

-- /stdout --
** stderr ** 
	I0719 12:09:26.052041    5895 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:26.052425    5895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:26.052430    5895 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:26.052433    5895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:26.052585    5895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:26.052806    5895 out.go:298] Setting JSON to false
	I0719 12:09:26.052814    5895 mustload.go:65] Loading cluster: old-k8s-version-120000
	I0719 12:09:26.053025    5895 config.go:182] Loaded profile config "old-k8s-version-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 12:09:26.057277    5895 out.go:177] * The control-plane node old-k8s-version-120000 host is not running: state=Stopped
	I0719 12:09:26.063676    5895 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-120000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-120000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (33.417708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (26.743458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
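
Exit status 83 is minikube reporting that the target host is not running, and the post-mortem's `status --format={{.Host}}` probes (exit status 7, "Stopped") confirm it. A hedged Go sketch of gating pause on that probe, reusing the binary and flags that appear in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query only the host field, as the post-mortem above does; stdout is
	// captured even though a stopped host makes the command exit non-zero.
	out, _ := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-120000").Output()
	if state := strings.TrimSpace(string(out)); state != "Running" {
		fmt.Printf("host is %q; pause would exit 83, skipping\n", state)
		return
	}
	// Only a running host can be paused.
	_ = exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-120000").Run()
}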

TestStartStop/group/embed-certs/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-262000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-262000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.825446584s)

-- stdout --
	* [embed-certs-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-262000" primary control-plane node in "embed-certs-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:26.363230    5918 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:26.363352    5918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:26.363355    5918 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:26.363358    5918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:26.363475    5918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:26.364691    5918 out.go:298] Setting JSON to false
	I0719 12:09:26.381522    5918 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4129,"bootTime":1721412037,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:26.381590    5918 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:26.386375    5918 out.go:177] * [embed-certs-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:26.393311    5918 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:26.393358    5918 notify.go:220] Checking for updates...
	I0719 12:09:26.400293    5918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:26.403293    5918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:26.410285    5918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:26.413355    5918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:26.416329    5918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:26.419538    5918 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:26.419604    5918 config.go:182] Loaded profile config "no-preload-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 12:09:26.419657    5918 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:26.424322    5918 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:09:26.431229    5918 start.go:297] selected driver: qemu2
	I0719 12:09:26.431235    5918 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:09:26.431240    5918 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:26.433631    5918 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:09:26.436329    5918 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:09:26.439389    5918 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:26.439425    5918 cni.go:84] Creating CNI manager for ""
	I0719 12:09:26.439434    5918 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:26.439438    5918 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:09:26.439467    5918 start.go:340] cluster config:
	{Name:embed-certs-262000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:26.443168    5918 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:26.450300    5918 out.go:177] * Starting "embed-certs-262000" primary control-plane node in "embed-certs-262000" cluster
	I0719 12:09:26.454197    5918 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:09:26.454213    5918 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:09:26.454222    5918 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:26.454290    5918 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:26.454296    5918 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:09:26.454370    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/embed-certs-262000/config.json ...
	I0719 12:09:26.454383    5918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/embed-certs-262000/config.json: {Name:mkbc99c3a4ca1ac90eedfa5a4cbf900ffc3cc826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:09:26.454781    5918 start.go:360] acquireMachinesLock for embed-certs-262000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:26.454823    5918 start.go:364] duration metric: took 33.416µs to acquireMachinesLock for "embed-certs-262000"
	I0719 12:09:26.454835    5918 start.go:93] Provisioning new machine with config: &{Name:embed-certs-262000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:26.454866    5918 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:26.459361    5918 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:26.476967    5918 start.go:159] libmachine.API.Create for "embed-certs-262000" (driver="qemu2")
	I0719 12:09:26.476993    5918 client.go:168] LocalClient.Create starting
	I0719 12:09:26.477054    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:26.477088    5918 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:26.477099    5918 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:26.477136    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:26.477160    5918 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:26.477174    5918 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:26.477544    5918 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:26.619589    5918 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:26.781414    5918 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:26.781429    5918 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:26.781644    5918 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:26.791113    5918 main.go:141] libmachine: STDOUT: 
	I0719 12:09:26.791133    5918 main.go:141] libmachine: STDERR: 
	I0719 12:09:26.791186    5918 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2 +20000M
	I0719 12:09:26.799083    5918 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:26.799096    5918 main.go:141] libmachine: STDERR: 
	I0719 12:09:26.799114    5918 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:26.799119    5918 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:26.799130    5918 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:26.799156    5918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:ec:ce:7c:85:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:26.800779    5918 main.go:141] libmachine: STDOUT: 
	I0719 12:09:26.800796    5918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:26.800814    5918 client.go:171] duration metric: took 323.821084ms to LocalClient.Create
	I0719 12:09:28.802959    5918 start.go:128] duration metric: took 2.348096458s to createHost
	I0719 12:09:28.803032    5918 start.go:83] releasing machines lock for "embed-certs-262000", held for 2.3482325s
	W0719 12:09:28.803172    5918 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:28.816762    5918 out.go:177] * Deleting "embed-certs-262000" in qemu2 ...
	W0719 12:09:28.840162    5918 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:28.840191    5918 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:33.842387    5918 start.go:360] acquireMachinesLock for embed-certs-262000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:33.842884    5918 start.go:364] duration metric: took 364.25µs to acquireMachinesLock for "embed-certs-262000"
	I0719 12:09:33.843007    5918 start.go:93] Provisioning new machine with config: &{Name:embed-certs-262000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:33.843268    5918 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:33.853023    5918 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:33.902444    5918 start.go:159] libmachine.API.Create for "embed-certs-262000" (driver="qemu2")
	I0719 12:09:33.902495    5918 client.go:168] LocalClient.Create starting
	I0719 12:09:33.902603    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:33.902669    5918 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:33.902688    5918 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:33.902749    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:33.902793    5918 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:33.902806    5918 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:33.903384    5918 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:34.053257    5918 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:34.091910    5918 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:34.091918    5918 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:34.092093    5918 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:34.101156    5918 main.go:141] libmachine: STDOUT: 
	I0719 12:09:34.101173    5918 main.go:141] libmachine: STDERR: 
	I0719 12:09:34.101232    5918 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2 +20000M
	I0719 12:09:34.109016    5918 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:34.109029    5918 main.go:141] libmachine: STDERR: 
	I0719 12:09:34.109041    5918 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:34.109047    5918 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:34.109057    5918 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:34.109081    5918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e4:6c:85:b8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:34.110684    5918 main.go:141] libmachine: STDOUT: 
	I0719 12:09:34.110699    5918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:34.110709    5918 client.go:171] duration metric: took 208.210542ms to LocalClient.Create
	I0719 12:09:36.112858    5918 start.go:128] duration metric: took 2.269590292s to createHost
	I0719 12:09:36.112975    5918 start.go:83] releasing machines lock for "embed-certs-262000", held for 2.270091625s
	W0719 12:09:36.113283    5918 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:36.127804    5918 out.go:177] 
	W0719 12:09:36.135973    5918 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:36.136012    5918 out.go:239] * 
	* 
	W0719 12:09:36.138477    5918 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:36.145807    5918 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-262000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (65.278417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.89s)
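
Both creation attempts above die at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is accepting connections on /var/run/socket_vmnet. A minimal Go probe of that precondition (illustrative only; minikube itself shells out to the client binary rather than dialing the socket):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		// Reproduces the failure mode in this log: connection refused.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}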

TestStartStop/group/no-preload/serial/SecondStart (6.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-371000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-371000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (6.473545167s)

-- stdout --
	* [no-preload-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-371000" primary control-plane node in "no-preload-371000" cluster
	* Restarting existing qemu2 VM for "no-preload-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:29.745589    5948 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:29.745710    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:29.745713    5948 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:29.745715    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:29.745843    5948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:29.746857    5948 out.go:298] Setting JSON to false
	I0719 12:09:29.762978    5948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4132,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:29.763048    5948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:29.767633    5948 out.go:177] * [no-preload-371000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:29.774629    5948 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:29.774688    5948 notify.go:220] Checking for updates...
	I0719 12:09:29.781579    5948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:29.784589    5948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:29.787617    5948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:29.790593    5948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:29.793606    5948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:29.796777    5948 config.go:182] Loaded profile config "no-preload-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 12:09:29.797071    5948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:29.801589    5948 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 12:09:29.808539    5948 start.go:297] selected driver: qemu2
	I0719 12:09:29.808545    5948 start.go:901] validating driver "qemu2" against &{Name:no-preload-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:29.808613    5948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:29.811001    5948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:29.811036    5948 cni.go:84] Creating CNI manager for ""
	I0719 12:09:29.811042    5948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:29.811068    5948 start.go:340] cluster config:
	{Name:no-preload-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:29.814481    5948 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.822414    5948 out.go:177] * Starting "no-preload-371000" primary control-plane node in "no-preload-371000" cluster
	I0719 12:09:29.826561    5948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 12:09:29.826619    5948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/no-preload-371000/config.json ...
	I0719 12:09:29.826651    5948 cache.go:107] acquiring lock: {Name:mkf3de4290b7ea2a2cf08483b15bdd55dce00d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826654    5948 cache.go:107] acquiring lock: {Name:mkb7da12256218042c3c5e2b2bdfaf0724450fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826674    5948 cache.go:107] acquiring lock: {Name:mk8d79415151f82d57f407b8d548dedd858f822d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826702    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 12:09:29.826708    5948 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 58.166µs
	I0719 12:09:29.826713    5948 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 12:09:29.826723    5948 cache.go:107] acquiring lock: {Name:mkc42c96a4965e2adc782ec31eee9629d116b544 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826732    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0719 12:09:29.826745    5948 cache.go:107] acquiring lock: {Name:mk8f55e98f823c2b250c8b2565d3e8c5f0eea927 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826756    5948 cache.go:107] acquiring lock: {Name:mk7c79ce13ccceb676f563f09d5be4a19b01132c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826784    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0719 12:09:29.826788    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0719 12:09:29.826790    5948 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 45.75µs
	I0719 12:09:29.826792    5948 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 69.5µs
	I0719 12:09:29.826795    5948 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0719 12:09:29.826796    5948 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0719 12:09:29.826747    5948 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 73.375µs
	I0719 12:09:29.826821    5948 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0719 12:09:29.826736    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0719 12:09:29.826826    5948 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 177.25µs
	I0719 12:09:29.826829    5948 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0719 12:09:29.826660    5948 cache.go:107] acquiring lock: {Name:mk06e2105666d8f631ecd16ab34fa68fea6df9b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826800    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0719 12:09:29.826849    5948 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 92.75µs
	I0719 12:09:29.826853    5948 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0719 12:09:29.826808    5948 cache.go:107] acquiring lock: {Name:mkf6bd33cdb36b07dd1b0fe94fd50d8f25b73886 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:29.826860    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0719 12:09:29.826863    5948 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 212.083µs
	I0719 12:09:29.826866    5948 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0719 12:09:29.826884    5948 cache.go:115] /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0719 12:09:29.826889    5948 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 81.625µs
	I0719 12:09:29.826892    5948 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0719 12:09:29.826897    5948 cache.go:87] Successfully saved all images to host disk.
	I0719 12:09:29.827021    5948 start.go:360] acquireMachinesLock for no-preload-371000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:29.827059    5948 start.go:364] duration metric: took 31.959µs to acquireMachinesLock for "no-preload-371000"
	I0719 12:09:29.827068    5948 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:29.827073    5948 fix.go:54] fixHost starting: 
	I0719 12:09:29.827191    5948 fix.go:112] recreateIfNeeded on no-preload-371000: state=Stopped err=<nil>
	W0719 12:09:29.827201    5948 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:29.834568    5948 out.go:177] * Restarting existing qemu2 VM for "no-preload-371000" ...
	I0719 12:09:29.838643    5948 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:29.838703    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d6:8c:e1:a8:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:29.840720    5948 main.go:141] libmachine: STDOUT: 
	I0719 12:09:29.840737    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:29.840761    5948 fix.go:56] duration metric: took 13.688875ms for fixHost
	I0719 12:09:29.840766    5948 start.go:83] releasing machines lock for "no-preload-371000", held for 13.702167ms
	W0719 12:09:29.840773    5948 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:29.840803    5948 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:29.840807    5948 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:34.842937    5948 start.go:360] acquireMachinesLock for no-preload-371000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:36.113153    5948 start.go:364] duration metric: took 1.270075833s to acquireMachinesLock for "no-preload-371000"
	I0719 12:09:36.113345    5948 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:36.113364    5948 fix.go:54] fixHost starting: 
	I0719 12:09:36.114054    5948 fix.go:112] recreateIfNeeded on no-preload-371000: state=Stopped err=<nil>
	W0719 12:09:36.114114    5948 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:36.131651    5948 out.go:177] * Restarting existing qemu2 VM for "no-preload-371000" ...
	I0719 12:09:36.138763    5948 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:36.138970    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d6:8c:e1:a8:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/no-preload-371000/disk.qcow2
	I0719 12:09:36.148388    5948 main.go:141] libmachine: STDOUT: 
	I0719 12:09:36.148441    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:36.148520    5948 fix.go:56] duration metric: took 35.159667ms for fixHost
	I0719 12:09:36.148540    5948 start.go:83] releasing machines lock for "no-preload-371000", held for 35.342125ms
	W0719 12:09:36.148701    5948 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:36.164930    5948 out.go:177] 
	W0719 12:09:36.167192    5948 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:36.167251    5948 out.go:239] * 
	* 
	W0719 12:09:36.169358    5948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:36.185768    5948 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-371000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (45.253458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.52s)
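
Note on the failure mode: every stderr block in this group bottoms out on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon's unix socket; with the daemon down, the VM never starts and every dependent step fails. The Go sketch below is a hypothetical triage helper (not minikube code) that reproduces just that first connection step, separating "daemon not accepting" from "socket path missing":

    // probe_socket_vmnet.go: hypothetical diagnostic, not part of minikube.
    // Dials the unix socket that socket_vmnet_client needs; a "connection
    // refused" here corresponds to the driver failure in the log above.
    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // ECONNREFUSED: socket file exists but no daemon is listening.
            // ENOENT: socket_vmnet was never started, or uses another path.
            fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }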

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-262000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-262000 create -f testdata/busybox.yaml: exit status 1 (30.763209ms)

** stderr ** 
	error: context "embed-certs-262000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-262000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (31.569708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (33.902792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
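
This failure is a cascade, not an independent bug: FirstStart never created the cluster, so no "embed-certs-262000" entry was ever written to the kubeconfig, and kubectl rejects the --context flag. A minimal sketch of that lookup, assuming k8s.io/client-go (the context name and the check itself are illustrative, not the test's code):

    // context_check.go: illustrative sketch using client-go's kubeconfig
    // loader; mirrors the `context ... does not exist` error above.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Loads kubeconfig from $KUBECONFIG or the default path.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts["embed-certs-262000"]; !ok {
            fmt.Println(`error: context "embed-certs-262000" does not exist`)
            os.Exit(1)
        }
    }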

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-371000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (32.360291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-371000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-371000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-371000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.233625ms)

** stderr ** 
	error: context "no-preload-371000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-371000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (31.092625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-262000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-262000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-262000 describe deploy/metrics-server -n kube-system: exit status 1 (28.594875ms)

** stderr ** 
	error: context "embed-certs-262000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-262000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (31.580792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-371000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (29.247167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
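
The "(-want +got)" diff above lists only removals because the "got" side is empty: "image list" returned nothing from a VM that never booted. The output shape matches what go-cmp produces; below is a minimal sketch under that assumption (the want list is trimmed for brevity):

    // image_diff.go: sketch assuming github.com/google/go-cmp, whose
    // cmp.Diff output matches the -want/+got block in the log above.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
            "registry.k8s.io/pause:3.10",
        }
        var got []string // empty: the host never started, so no images were listed
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("images missing (-want +got):\n%s", diff)
        }
    }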

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-371000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-371000 --alsologtostderr -v=1: exit status 83 (48.751542ms)

-- stdout --
	* The control-plane node no-preload-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-371000"

-- /stdout --
** stderr ** 
	I0719 12:09:36.435684    5981 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:36.435855    5981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:36.435859    5981 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:36.435862    5981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:36.435993    5981 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:36.436293    5981 out.go:298] Setting JSON to false
	I0719 12:09:36.436300    5981 mustload.go:65] Loading cluster: no-preload-371000
	I0719 12:09:36.436490    5981 config.go:182] Loaded profile config "no-preload-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 12:09:36.440847    5981 out.go:177] * The control-plane node no-preload-371000 host is not running: state=Stopped
	I0719 12:09:36.448739    5981 out.go:177]   To start a cluster, run: "minikube start -p no-preload-371000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-371000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (31.576708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (27.180916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
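
Exit status 83 here is not a pause error as such: mustload sees the profile's host in state Stopped and bails out with advice before attempting anything. A reduced sketch of that guard follows (hypothetical code, not minikube's mustload implementation; only the exit value 83 and the two advice lines are taken from the log above):

    // pause_guard.go: reduced illustration of the stopped-host guard.
    package main

    import (
        "fmt"
        "os"
    )

    const exitStoppedHost = 83 // observed exit status in the run above

    func requireRunning(profile, state string) {
        if state != "Running" {
            fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, state)
            fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
            os.Exit(exitStoppedHost)
        }
    }

    func main() {
        requireRunning("no-preload-371000", "Stopped")
    }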

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-747000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-747000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.80253575s)

-- stdout --
	* [default-k8s-diff-port-747000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-747000" primary control-plane node in "default-k8s-diff-port-747000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-747000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:36.846348    6013 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:36.846464    6013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:36.846467    6013 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:36.846470    6013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:36.846594    6013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:36.847686    6013 out.go:298] Setting JSON to false
	I0719 12:09:36.864096    6013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4139,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:36.864159    6013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:36.868852    6013 out.go:177] * [default-k8s-diff-port-747000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:36.875777    6013 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:36.875842    6013 notify.go:220] Checking for updates...
	I0719 12:09:36.879758    6013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:36.882790    6013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:36.885838    6013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:36.888827    6013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:36.891778    6013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:36.895137    6013 config.go:182] Loaded profile config "embed-certs-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:36.895200    6013 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:36.895258    6013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:36.899673    6013 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:09:36.906783    6013 start.go:297] selected driver: qemu2
	I0719 12:09:36.906794    6013 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:09:36.906807    6013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:36.909213    6013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 12:09:36.911762    6013 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:09:36.914871    6013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:36.914911    6013 cni.go:84] Creating CNI manager for ""
	I0719 12:09:36.914919    6013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:36.914924    6013 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:09:36.914948    6013 start.go:340] cluster config:
	{Name:default-k8s-diff-port-747000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:36.918597    6013 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:36.923733    6013 out.go:177] * Starting "default-k8s-diff-port-747000" primary control-plane node in "default-k8s-diff-port-747000" cluster
	I0719 12:09:36.927776    6013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:09:36.927789    6013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:09:36.927800    6013 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:36.927868    6013 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:36.927874    6013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:09:36.927938    6013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/default-k8s-diff-port-747000/config.json ...
	I0719 12:09:36.927951    6013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/default-k8s-diff-port-747000/config.json: {Name:mke4f14c0a96fb16b08a0d1b66bb1f019b3ceb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:09:36.928286    6013 start.go:360] acquireMachinesLock for default-k8s-diff-port-747000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:36.928321    6013 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "default-k8s-diff-port-747000"
	I0719 12:09:36.928331    6013 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:36.928362    6013 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:36.932802    6013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:36.950104    6013 start.go:159] libmachine.API.Create for "default-k8s-diff-port-747000" (driver="qemu2")
	I0719 12:09:36.950135    6013 client.go:168] LocalClient.Create starting
	I0719 12:09:36.950216    6013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:36.950251    6013 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:36.950259    6013 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:36.950296    6013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:36.950318    6013 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:36.950325    6013 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:36.950772    6013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:37.093738    6013 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:37.134726    6013 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:37.134731    6013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:37.134891    6013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:37.143902    6013 main.go:141] libmachine: STDOUT: 
	I0719 12:09:37.143919    6013 main.go:141] libmachine: STDERR: 
	I0719 12:09:37.143976    6013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2 +20000M
	I0719 12:09:37.151727    6013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:37.151741    6013 main.go:141] libmachine: STDERR: 
	I0719 12:09:37.151755    6013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:37.151760    6013 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:37.151772    6013 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:37.151800    6013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:07:97:da:36:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:37.153374    6013 main.go:141] libmachine: STDOUT: 
	I0719 12:09:37.153389    6013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:37.153405    6013 client.go:171] duration metric: took 203.269459ms to LocalClient.Create
	I0719 12:09:39.155576    6013 start.go:128] duration metric: took 2.22722225s to createHost
	I0719 12:09:39.155641    6013 start.go:83] releasing machines lock for "default-k8s-diff-port-747000", held for 2.227341083s
	W0719 12:09:39.155691    6013 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:39.171073    6013 out.go:177] * Deleting "default-k8s-diff-port-747000" in qemu2 ...
	W0719 12:09:39.197007    6013 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:39.197060    6013 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:44.197329    6013 start.go:360] acquireMachinesLock for default-k8s-diff-port-747000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:44.197924    6013 start.go:364] duration metric: took 485.416µs to acquireMachinesLock for "default-k8s-diff-port-747000"
	I0719 12:09:44.198041    6013 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:44.198292    6013 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:44.207957    6013 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:44.260936    6013 start.go:159] libmachine.API.Create for "default-k8s-diff-port-747000" (driver="qemu2")
	I0719 12:09:44.260991    6013 client.go:168] LocalClient.Create starting
	I0719 12:09:44.261103    6013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:44.261176    6013 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:44.261190    6013 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:44.261256    6013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:44.261300    6013 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:44.261316    6013 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:44.261879    6013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:44.413935    6013 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:44.539081    6013 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:44.539086    6013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:44.539243    6013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:44.548348    6013 main.go:141] libmachine: STDOUT: 
	I0719 12:09:44.548374    6013 main.go:141] libmachine: STDERR: 
	I0719 12:09:44.548425    6013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2 +20000M
	I0719 12:09:44.556396    6013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:44.556410    6013 main.go:141] libmachine: STDERR: 
	I0719 12:09:44.556423    6013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:44.556427    6013 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:44.556437    6013 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:44.556476    6013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0e:2b:bd:ff:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:44.558123    6013 main.go:141] libmachine: STDOUT: 
	I0719 12:09:44.558137    6013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:44.558150    6013 client.go:171] duration metric: took 297.157709ms to LocalClient.Create
	I0719 12:09:46.560307    6013 start.go:128] duration metric: took 2.362018542s to createHost
	I0719 12:09:46.560442    6013 start.go:83] releasing machines lock for "default-k8s-diff-port-747000", held for 2.362468208s
	W0719 12:09:46.560768    6013 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-747000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-747000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:46.568438    6013 out.go:177] 
	W0719 12:09:46.579448    6013 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:46.579488    6013 out.go:239] * 
	* 
	W0719 12:09:46.581944    6013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:46.595362    6013 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-747000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (68.776084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)
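
Unlike the SecondStart failures, this one is the create path: the qcow2 disk is built successfully (qemu-img convert, then resize +20000M), but launching QEMU through socket_vmnet_client fails, so minikube deletes the half-created machine, waits 5 seconds, retries once, and then exits 80 with GUEST_PROVISION. A compact sketch of that single-retry shape (illustrative only; the messages are copied from the log above):

    // retry_start.go: illustrative single-retry loop matching the
    // "will try again in 5 seconds" behavior in the log above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the qemu2 driver start, which in this run
    // always fails before the VM boots.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }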

TestStartStop/group/embed-certs/serial/SecondStart (6.5s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-262000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-262000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.441067708s)

-- stdout --
	* [embed-certs-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-262000" primary control-plane node in "embed-certs-262000" cluster
	* Restarting existing qemu2 VM for "embed-certs-262000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-262000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:40.225227    6039 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:40.225352    6039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:40.225355    6039 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:40.225358    6039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:40.225498    6039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:40.226486    6039 out.go:298] Setting JSON to false
	I0719 12:09:40.242545    6039 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4143,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:40.242620    6039 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:40.246935    6039 out.go:177] * [embed-certs-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:40.253849    6039 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:40.253932    6039 notify.go:220] Checking for updates...
	I0719 12:09:40.260833    6039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:40.263861    6039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:40.266923    6039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:40.269862    6039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:40.272818    6039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:40.281151    6039 config.go:182] Loaded profile config "embed-certs-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:40.281429    6039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:40.285851    6039 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 12:09:40.292825    6039 start.go:297] selected driver: qemu2
	I0719 12:09:40.292832    6039 start.go:901] validating driver "qemu2" against &{Name:embed-certs-262000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:embed-certs-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:40.292884    6039 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:40.295367    6039 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:40.295410    6039 cni.go:84] Creating CNI manager for ""
	I0719 12:09:40.295418    6039 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:40.295443    6039 start.go:340] cluster config:
	{Name:embed-certs-262000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:40.299174    6039 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:40.306853    6039 out.go:177] * Starting "embed-certs-262000" primary control-plane node in "embed-certs-262000" cluster
	I0719 12:09:40.310822    6039 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:09:40.310838    6039 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:09:40.310853    6039 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:40.310929    6039 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:40.310935    6039 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:09:40.311012    6039 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/embed-certs-262000/config.json ...
	I0719 12:09:40.311492    6039 start.go:360] acquireMachinesLock for embed-certs-262000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:40.311531    6039 start.go:364] duration metric: took 32.25µs to acquireMachinesLock for "embed-certs-262000"
	I0719 12:09:40.311542    6039 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:40.311549    6039 fix.go:54] fixHost starting: 
	I0719 12:09:40.311685    6039 fix.go:112] recreateIfNeeded on embed-certs-262000: state=Stopped err=<nil>
	W0719 12:09:40.311695    6039 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:40.318811    6039 out.go:177] * Restarting existing qemu2 VM for "embed-certs-262000" ...
	I0719 12:09:40.322827    6039 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:40.322887    6039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e4:6c:85:b8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:40.324993    6039 main.go:141] libmachine: STDOUT: 
	I0719 12:09:40.325014    6039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:40.325044    6039 fix.go:56] duration metric: took 13.494708ms for fixHost
	I0719 12:09:40.325050    6039 start.go:83] releasing machines lock for "embed-certs-262000", held for 13.512125ms
	W0719 12:09:40.325055    6039 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:40.325089    6039 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:40.325094    6039 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:45.327303    6039 start.go:360] acquireMachinesLock for embed-certs-262000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:46.560626    6039 start.go:364] duration metric: took 1.233173666s to acquireMachinesLock for "embed-certs-262000"
	I0719 12:09:46.560751    6039 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:46.560772    6039 fix.go:54] fixHost starting: 
	I0719 12:09:46.561522    6039 fix.go:112] recreateIfNeeded on embed-certs-262000: state=Stopped err=<nil>
	W0719 12:09:46.561554    6039 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:46.576354    6039 out.go:177] * Restarting existing qemu2 VM for "embed-certs-262000" ...
	I0719 12:09:46.580632    6039 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:46.580857    6039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e4:6c:85:b8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/embed-certs-262000/disk.qcow2
	I0719 12:09:46.590296    6039 main.go:141] libmachine: STDOUT: 
	I0719 12:09:46.590351    6039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:46.590432    6039 fix.go:56] duration metric: took 29.662208ms for fixHost
	I0719 12:09:46.590451    6039 start.go:83] releasing machines lock for "embed-certs-262000", held for 29.791583ms
	W0719 12:09:46.590602    6039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-262000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-262000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:46.603345    6039 out.go:177] 
	W0719 12:09:46.611317    6039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:46.611352    6039 out.go:239] * 
	* 
	W0719 12:09:46.614191    6039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:46.625432    6039 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-262000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (57.545583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.50s)
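
Every failure in this block reduces to the same host-side error: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets its network device and start exits with GUEST_PROVISION. A first triage step on the build host would be to confirm the daemon is running and its socket exists, and to relaunch it if not. This is only a sketch: the daemon binary path is inferred from the client path in the log, and the --vmnet-gateway address is the commonly documented default rather than a value from this report.

	# Is the daemon alive, and does its socket exist? (paths from the log above)
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# Relaunch if missing (needs root; binary path and gateway are assumptions)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet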

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-747000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-747000 create -f testdata/busybox.yaml: exit status 1 (31.949834ms)

** stderr ** 
	error: context "default-k8s-diff-port-747000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-747000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (31.057625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (33.523583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
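
The "context does not exist" failures in this group are cascades rather than independent bugs: the profile's earlier start failed on the same socket_vmnet error, so minikube never wrote a default-k8s-diff-port-747000 entry into the kubeconfig, and every later kubectl call fails before reaching a server. A quick way to confirm the cascade is to list the contexts kubectl actually knows about (sketch):

	kubectl config get-contexts -o name | grep default-k8s-diff-port-747000 || echo "context missing"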

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-262000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (34.164584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-262000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-262000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-262000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.199625ms)

** stderr ** 
	error: context "embed-certs-262000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-262000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (29.017542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-747000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-747000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-747000 describe deploy/metrics-server -n kube-system: exit status 1 (28.817375ms)

** stderr ** 
	error: context "default-k8s-diff-port-747000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-747000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (29.770084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-262000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (29.990417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
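
The want/got diff above compares the expected v1.30.3 image set against "minikube image list"; with the VM stopped the got side is empty, so every expected image is reported missing. The same check can be reproduced by hand against a profile; in this sketch, jq is assumed to be installed and the JSON field names are from memory, so they may differ across minikube versions:

	out/minikube-darwin-arm64 -p embed-certs-262000 image list --format=json | jq -r '.[].repoTags[]' | sort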

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-262000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-262000 --alsologtostderr -v=1: exit status 83 (48.447625ms)

-- stdout --
	* The control-plane node embed-certs-262000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-262000"

-- /stdout --
** stderr ** 
	I0719 12:09:46.904128    6075 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:46.904258    6075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:46.904261    6075 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:46.904263    6075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:46.904411    6075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:46.904661    6075 out.go:298] Setting JSON to false
	I0719 12:09:46.904667    6075 mustload.go:65] Loading cluster: embed-certs-262000
	I0719 12:09:46.904872    6075 config.go:182] Loaded profile config "embed-certs-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:46.909177    6075 out.go:177] * The control-plane node embed-certs-262000 host is not running: state=Stopped
	I0719 12:09:46.916086    6075 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-262000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-262000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (29.1225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (27.486541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-090000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-090000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.814703s)

-- stdout --
	* [newest-cni-090000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-090000" primary control-plane node in "newest-cni-090000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-090000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:09:47.213975    6100 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:47.214100    6100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:47.214103    6100 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:47.214106    6100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:47.214238    6100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:47.215328    6100 out.go:298] Setting JSON to false
	I0719 12:09:47.231493    6100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4150,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:47.231598    6100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:47.233903    6100 out.go:177] * [newest-cni-090000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:47.241194    6100 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:47.241267    6100 notify.go:220] Checking for updates...
	I0719 12:09:47.247159    6100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:47.250237    6100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:47.251712    6100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:47.255221    6100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:47.258207    6100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:47.261659    6100 config.go:182] Loaded profile config "default-k8s-diff-port-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:47.261719    6100 config.go:182] Loaded profile config "multinode-281000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:47.261772    6100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:47.266117    6100 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 12:09:47.273212    6100 start.go:297] selected driver: qemu2
	I0719 12:09:47.273218    6100 start.go:901] validating driver "qemu2" against <nil>
	I0719 12:09:47.273225    6100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:47.275632    6100 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0719 12:09:47.275688    6100 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 12:09:47.284200    6100 out.go:177] * Automatically selected the socket_vmnet network
	I0719 12:09:47.287306    6100 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 12:09:47.287353    6100 cni.go:84] Creating CNI manager for ""
	I0719 12:09:47.287362    6100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:47.287366    6100 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 12:09:47.287385    6100 start.go:340] cluster config:
	{Name:newest-cni-090000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:47.291165    6100 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:47.299247    6100 out.go:177] * Starting "newest-cni-090000" primary control-plane node in "newest-cni-090000" cluster
	I0719 12:09:47.303162    6100 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 12:09:47.303180    6100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 12:09:47.303199    6100 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:47.303278    6100 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:47.303290    6100 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 12:09:47.303362    6100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/newest-cni-090000/config.json ...
	I0719 12:09:47.303374    6100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/newest-cni-090000/config.json: {Name:mk48fcc22dade146b23e3b270d1508ffcc529b47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 12:09:47.303590    6100 start.go:360] acquireMachinesLock for newest-cni-090000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:47.303626    6100 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "newest-cni-090000"
	I0719 12:09:47.303638    6100 start.go:93] Provisioning new machine with config: &{Name:newest-cni-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:47.303669    6100 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:47.311161    6100 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:47.328718    6100 start.go:159] libmachine.API.Create for "newest-cni-090000" (driver="qemu2")
	I0719 12:09:47.328746    6100 client.go:168] LocalClient.Create starting
	I0719 12:09:47.328807    6100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:47.328838    6100 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:47.328847    6100 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:47.328888    6100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:47.328919    6100 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:47.328926    6100 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:47.329278    6100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:47.470777    6100 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:47.580349    6100 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:47.580355    6100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:47.580522    6100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:09:47.589834    6100 main.go:141] libmachine: STDOUT: 
	I0719 12:09:47.589862    6100 main.go:141] libmachine: STDERR: 
	I0719 12:09:47.589910    6100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2 +20000M
	I0719 12:09:47.597876    6100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:47.597891    6100 main.go:141] libmachine: STDERR: 
	I0719 12:09:47.597900    6100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:09:47.597907    6100 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:47.597919    6100 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:47.597945    6100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:d9:5f:c9:b1:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:09:47.599572    6100 main.go:141] libmachine: STDOUT: 
	I0719 12:09:47.599590    6100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:47.599615    6100 client.go:171] duration metric: took 270.863ms to LocalClient.Create
	I0719 12:09:49.601828    6100 start.go:128] duration metric: took 2.298165583s to createHost
	I0719 12:09:49.601896    6100 start.go:83] releasing machines lock for "newest-cni-090000", held for 2.298291459s
	W0719 12:09:49.601985    6100 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:49.618419    6100 out.go:177] * Deleting "newest-cni-090000" in qemu2 ...
	W0719 12:09:49.648466    6100 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:49.648498    6100 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:54.650620    6100 start.go:360] acquireMachinesLock for newest-cni-090000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:54.656990    6100 start.go:364] duration metric: took 6.2985ms to acquireMachinesLock for "newest-cni-090000"
	I0719 12:09:54.657075    6100 start.go:93] Provisioning new machine with config: &{Name:newest-cni-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 12:09:54.657318    6100 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 12:09:54.664898    6100 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 12:09:54.710646    6100 start.go:159] libmachine.API.Create for "newest-cni-090000" (driver="qemu2")
	I0719 12:09:54.710702    6100 client.go:168] LocalClient.Create starting
	I0719 12:09:54.710857    6100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/ca.pem
	I0719 12:09:54.710932    6100 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:54.710947    6100 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:54.711005    6100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19307-1066/.minikube/certs/cert.pem
	I0719 12:09:54.711055    6100 main.go:141] libmachine: Decoding PEM data...
	I0719 12:09:54.711071    6100 main.go:141] libmachine: Parsing certificate...
	I0719 12:09:54.711611    6100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 12:09:54.862217    6100 main.go:141] libmachine: Creating SSH key...
	I0719 12:09:54.932783    6100 main.go:141] libmachine: Creating Disk image...
	I0719 12:09:54.932793    6100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 12:09:54.932992    6100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2.raw /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:09:54.948527    6100 main.go:141] libmachine: STDOUT: 
	I0719 12:09:54.948545    6100 main.go:141] libmachine: STDERR: 
	I0719 12:09:54.948603    6100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2 +20000M
	I0719 12:09:54.957108    6100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 12:09:54.957137    6100 main.go:141] libmachine: STDERR: 
	I0719 12:09:54.957149    6100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:09:54.957154    6100 main.go:141] libmachine: Starting QEMU VM...
	I0719 12:09:54.957160    6100 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:54.957194    6100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:58:f4:f3:36:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:09:54.958943    6100 main.go:141] libmachine: STDOUT: 
	I0719 12:09:54.958965    6100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:54.958980    6100 client.go:171] duration metric: took 248.275958ms to LocalClient.Create
	I0719 12:09:56.961166    6100 start.go:128] duration metric: took 2.303839125s to createHost
	I0719 12:09:56.961271    6100 start.go:83] releasing machines lock for "newest-cni-090000", held for 2.304282958s
	W0719 12:09:56.961737    6100 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:56.971277    6100 out.go:177] 
	W0719 12:09:56.975341    6100 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:56.975383    6100 out.go:239] * 
	* 
	W0719 12:09:56.977675    6100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:56.991222    6100 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-090000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000: exit status 7 (64.266333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.88s)
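
Note that the qemu-img convert and resize steps succeed and the disk image is written, so QEMU itself looks healthy; the launch only fails once it is wrapped in socket_vmnet_client. One way to isolate the networking helper would be to boot the same image with QEMU's user-mode NAT in place of vmnet. The sketch below reuses the paths and flags from the log and swaps only the network wiring; user-mode networking gives the guest NAT-only connectivity, so this is purely a host-side check, not a working minikube configuration:

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2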

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-747000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-747000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.918620333s)

-- stdout --
	* [default-k8s-diff-port-747000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-747000" primary control-plane node in "default-k8s-diff-port-747000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-747000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-747000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0719 12:09:48.805134    6120 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:48.805255    6120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:48.805258    6120 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:48.805261    6120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:48.805391    6120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:48.806524    6120 out.go:298] Setting JSON to false
	I0719 12:09:48.822641    6120 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4151,"bootTime":1721412037,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:09:48.822711    6120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:09:48.828509    6120 out.go:177] * [default-k8s-diff-port-747000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:09:48.834484    6120 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:09:48.834550    6120 notify.go:220] Checking for updates...
	I0719 12:09:48.840481    6120 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:09:48.843481    6120 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:09:48.846451    6120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:09:48.849506    6120 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:09:48.850875    6120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:09:48.853714    6120 config.go:182] Loaded profile config "default-k8s-diff-port-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:48.853985    6120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:09:48.858511    6120 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 12:09:48.863488    6120 start.go:297] selected driver: qemu2
	I0719 12:09:48.863503    6120 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:48.863563    6120 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:09:48.865887    6120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 12:09:48.865908    6120 cni.go:84] Creating CNI manager for ""
	I0719 12:09:48.865916    6120 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:09:48.865944    6120 start.go:340] cluster config:
	{Name:default-k8s-diff-port-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:09:48.869334    6120 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:09:48.877481    6120 out.go:177] * Starting "default-k8s-diff-port-747000" primary control-plane node in "default-k8s-diff-port-747000" cluster
	I0719 12:09:48.881471    6120 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 12:09:48.881484    6120 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 12:09:48.881492    6120 cache.go:56] Caching tarball of preloaded images
	I0719 12:09:48.881545    6120 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:09:48.881551    6120 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 12:09:48.881598    6120 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/default-k8s-diff-port-747000/config.json ...
	I0719 12:09:48.882073    6120 start.go:360] acquireMachinesLock for default-k8s-diff-port-747000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:49.602047    6120 start.go:364] duration metric: took 719.962958ms to acquireMachinesLock for "default-k8s-diff-port-747000"
	I0719 12:09:49.602168    6120 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:49.602219    6120 fix.go:54] fixHost starting: 
	I0719 12:09:49.602898    6120 fix.go:112] recreateIfNeeded on default-k8s-diff-port-747000: state=Stopped err=<nil>
	W0719 12:09:49.602956    6120 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:49.611409    6120 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-747000" ...
	I0719 12:09:49.622503    6120 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:49.622743    6120 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0e:2b:bd:ff:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:49.634297    6120 main.go:141] libmachine: STDOUT: 
	I0719 12:09:49.634401    6120 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:49.634520    6120 fix.go:56] duration metric: took 32.313792ms for fixHost
	I0719 12:09:49.634542    6120 start.go:83] releasing machines lock for "default-k8s-diff-port-747000", held for 32.440959ms
	W0719 12:09:49.634571    6120 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:49.634726    6120 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:49.634741    6120 start.go:729] Will try again in 5 seconds ...
	I0719 12:09:54.636907    6120 start.go:360] acquireMachinesLock for default-k8s-diff-port-747000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:09:54.637353    6120 start.go:364] duration metric: took 346.333µs to acquireMachinesLock for "default-k8s-diff-port-747000"
	I0719 12:09:54.637491    6120 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:09:54.637510    6120 fix.go:54] fixHost starting: 
	I0719 12:09:54.638327    6120 fix.go:112] recreateIfNeeded on default-k8s-diff-port-747000: state=Stopped err=<nil>
	W0719 12:09:54.638358    6120 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:09:54.644030    6120 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-747000" ...
	I0719 12:09:54.646891    6120 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:09:54.647101    6120 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0e:2b:bd:ff:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/default-k8s-diff-port-747000/disk.qcow2
	I0719 12:09:54.656712    6120 main.go:141] libmachine: STDOUT: 
	I0719 12:09:54.656779    6120 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:09:54.656874    6120 fix.go:56] duration metric: took 19.363375ms for fixHost
	I0719 12:09:54.656903    6120 start.go:83] releasing machines lock for "default-k8s-diff-port-747000", held for 19.510541ms
	W0719 12:09:54.657093    6120 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-747000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-747000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:09:54.671968    6120 out.go:177] 
	W0719 12:09:54.675924    6120 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:09:54.675945    6120 out.go:239] * 
	* 
	W0719 12:09:54.677455    6120 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:09:54.686926    6120 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-747000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (45.938791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.97s)
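
Every qemu2 restart in this group dies at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the rest of the serial group inherits a stopped host. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as the /opt/socket_vmnet paths above suggest (these commands are illustrative and were not part of the recorded run):

    ls -l /var/run/socket_vmnet               # does the socket exist?
    pgrep -fl socket_vmnet                    # is the daemon process alive?
    sudo brew services restart socket_vmnet   # restart the service if it is down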

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-747000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (33.92525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-747000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-747000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-747000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.275292ms)

** stderr ** 
	error: context "default-k8s-diff-port-747000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-747000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (33.230167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
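
The two context errors above share one cause: because SecondStart never brought the profile up, no kubeconfig context named "default-k8s-diff-port-747000" was ever written, so kubectl fails before reaching any cluster. A quick manual check of that precondition (a sketch, not part of the harness):

    # The profile name only appears here after a successful start.
    kubectl config get-contexts -o name | grep default-k8s-diff-port-747000 || echo "context missing"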

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-747000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (28.953ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
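
The "-want +got" diff above is go-cmp notation: each "-" line is an image the test expected "image list" to report, and the got side is empty because the query ran against a stopped VM. Against a healthy profile the same expectation can be spot-checked by hand (a sketch; the repoTags field name is assumed from minikube's JSON output):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-747000 image list --format=json \
      | jq -r '.[].repoTags[]?'   # should include registry.k8s.io/kube-apiserver:v1.30.3 etc.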

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-747000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-747000 --alsologtostderr -v=1: exit status 83 (43.534375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-747000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-747000"

-- /stdout --
** stderr ** 
	I0719 12:09:54.939388    6142 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:09:54.939523    6142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:54.939526    6142 out.go:304] Setting ErrFile to fd 2...
	I0719 12:09:54.939528    6142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:09:54.939662    6142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:09:54.939873    6142 out.go:298] Setting JSON to false
	I0719 12:09:54.939880    6142 mustload.go:65] Loading cluster: default-k8s-diff-port-747000
	I0719 12:09:54.940105    6142 config.go:182] Loaded profile config "default-k8s-diff-port-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 12:09:54.944012    6142 out.go:177] * The control-plane node default-k8s-diff-port-747000 host is not running: state=Stopped
	I0719 12:09:54.951990    6142 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-747000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-747000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (28.344416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (28.104208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
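
Two distinct exit codes carry the signal in this block: "pause" exits 83, minikube's refusal path when the target host is stopped (matching the advice text it prints above), while the post-mortem "status" exits 7, which helpers_test.go explicitly tolerates as "may be ok". A sketch of reading the status code the way the harness does:

    out/minikube-darwin-arm64 status -p default-k8s-diff-port-747000 --format='{{.Host}}'
    echo "status exit: $?"   # 7 accompanies state=Stopped throughout this run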

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-090000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-090000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.172133167s)

-- stdout --
	* [newest-cni-090000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-090000" primary control-plane node in "newest-cni-090000" cluster
	* Restarting existing qemu2 VM for "newest-cni-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 12:10:01.229310    6190 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:10:01.229459    6190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:10:01.229462    6190 out.go:304] Setting ErrFile to fd 2...
	I0719 12:10:01.229465    6190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:10:01.229586    6190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:10:01.230599    6190 out.go:298] Setting JSON to false
	I0719 12:10:01.246454    6190 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4164,"bootTime":1721412037,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 12:10:01.246516    6190 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 12:10:01.251568    6190 out.go:177] * [newest-cni-090000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 12:10:01.257488    6190 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 12:10:01.257565    6190 notify.go:220] Checking for updates...
	I0719 12:10:01.264552    6190 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 12:10:01.267519    6190 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 12:10:01.270541    6190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 12:10:01.273551    6190 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 12:10:01.274828    6190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 12:10:01.277846    6190 config.go:182] Loaded profile config "newest-cni-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 12:10:01.278125    6190 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 12:10:01.282559    6190 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 12:10:01.287550    6190 start.go:297] selected driver: qemu2
	I0719 12:10:01.287556    6190 start.go:901] validating driver "qemu2" against &{Name:newest-cni-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:10:01.287626    6190 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 12:10:01.290017    6190 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 12:10:01.290038    6190 cni.go:84] Creating CNI manager for ""
	I0719 12:10:01.290045    6190 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 12:10:01.290076    6190 start.go:340] cluster config:
	{Name:newest-cni-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 12:10:01.293612    6190 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 12:10:01.301479    6190 out.go:177] * Starting "newest-cni-090000" primary control-plane node in "newest-cni-090000" cluster
	I0719 12:10:01.305565    6190 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 12:10:01.305582    6190 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 12:10:01.305591    6190 cache.go:56] Caching tarball of preloaded images
	I0719 12:10:01.305673    6190 preload.go:172] Found /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 12:10:01.305688    6190 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 12:10:01.305751    6190 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/newest-cni-090000/config.json ...
	I0719 12:10:01.306195    6190 start.go:360] acquireMachinesLock for newest-cni-090000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:10:01.306231    6190 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "newest-cni-090000"
	I0719 12:10:01.306239    6190 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:10:01.306246    6190 fix.go:54] fixHost starting: 
	I0719 12:10:01.306366    6190 fix.go:112] recreateIfNeeded on newest-cni-090000: state=Stopped err=<nil>
	W0719 12:10:01.306379    6190 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:10:01.309558    6190 out.go:177] * Restarting existing qemu2 VM for "newest-cni-090000" ...
	I0719 12:10:01.317603    6190 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:10:01.317650    6190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:58:f4:f3:36:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:10:01.319718    6190 main.go:141] libmachine: STDOUT: 
	I0719 12:10:01.319737    6190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:10:01.319767    6190 fix.go:56] duration metric: took 13.521083ms for fixHost
	I0719 12:10:01.319772    6190 start.go:83] releasing machines lock for "newest-cni-090000", held for 13.5375ms
	W0719 12:10:01.319778    6190 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:10:01.319819    6190 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:10:01.319824    6190 start.go:729] Will try again in 5 seconds ...
	I0719 12:10:06.321759    6190 start.go:360] acquireMachinesLock for newest-cni-090000: {Name:mk3436fb720b09552c99de743381135a5372bc5b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 12:10:06.322150    6190 start.go:364] duration metric: took 279.208µs to acquireMachinesLock for "newest-cni-090000"
	I0719 12:10:06.322256    6190 start.go:96] Skipping create...Using existing machine configuration
	I0719 12:10:06.322277    6190 fix.go:54] fixHost starting: 
	I0719 12:10:06.322955    6190 fix.go:112] recreateIfNeeded on newest-cni-090000: state=Stopped err=<nil>
	W0719 12:10:06.322980    6190 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 12:10:06.328603    6190 out.go:177] * Restarting existing qemu2 VM for "newest-cni-090000" ...
	I0719 12:10:06.332409    6190 qemu.go:418] Using hvf for hardware acceleration
	I0719 12:10:06.332622    6190 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:58:f4:f3:36:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19307-1066/.minikube/machines/newest-cni-090000/disk.qcow2
	I0719 12:10:06.341515    6190 main.go:141] libmachine: STDOUT: 
	I0719 12:10:06.341579    6190 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 12:10:06.341658    6190 fix.go:56] duration metric: took 19.381708ms for fixHost
	I0719 12:10:06.341681    6190 start.go:83] releasing machines lock for "newest-cni-090000", held for 19.506792ms
	W0719 12:10:06.341850    6190 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 12:10:06.349320    6190 out.go:177] 
	W0719 12:10:06.353514    6190 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 12:10:06.353545    6190 out.go:239] * 
	* 
	W0719 12:10:06.356956    6190 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 12:10:06.360487    6190 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-090000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000: exit status 7 (66.383875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
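
The newest-cni profile fails identically, which points away from its extra flags: the start dies at socket_vmnet before the kubeadm pod-network-cidr or feature-gate settings are ever applied. The flag plumbing itself could be validated without booting a VM via minikube's --dry-run mode (exercised elsewhere in this run by TestFunctional/parallel/DryRun); a sketch under that assumption:

    out/minikube-darwin-arm64 start -p newest-cni-090000 --dry-run --driver=qemu2 \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --feature-gates ServerSideApply=true --kubernetes-version=v1.31.0-beta.0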

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-090000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000: exit status 7 (29.643375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-090000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-090000 --alsologtostderr -v=1: exit status 83 (39.918334ms)

-- stdout --
	* The control-plane node newest-cni-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-090000"

-- /stdout --
** stderr ** 
	I0719 12:10:06.539706    6204 out.go:291] Setting OutFile to fd 1 ...
	I0719 12:10:06.539877    6204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:10:06.539880    6204 out.go:304] Setting ErrFile to fd 2...
	I0719 12:10:06.539882    6204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 12:10:06.540023    6204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 12:10:06.540245    6204 out.go:298] Setting JSON to false
	I0719 12:10:06.540251    6204 mustload.go:65] Loading cluster: newest-cni-090000
	I0719 12:10:06.540451    6204 config.go:182] Loaded profile config "newest-cni-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 12:10:06.543745    6204 out.go:177] * The control-plane node newest-cni-090000 host is not running: state=Stopped
	I0719 12:10:06.547708    6204 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-090000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-090000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000: exit status 7 (29.198709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-090000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000: exit status 7 (29.91075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 7.38
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.11
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 207.58
38 TestAddons/parallel/Registry 13.24
39 TestAddons/parallel/Ingress 19.86
40 TestAddons/parallel/InspektorGadget 10.22
41 TestAddons/parallel/MetricsServer 5.27
44 TestAddons/parallel/CSI 34.98
45 TestAddons/parallel/Headlamp 13.45
46 TestAddons/parallel/CloudSpanner 5.16
47 TestAddons/parallel/LocalPath 40.94
48 TestAddons/parallel/NvidiaDevicePlugin 5.15
49 TestAddons/parallel/Yakd 5
50 TestAddons/parallel/Volcano 38.81
53 TestAddons/serial/GCPAuth/Namespaces 0.07
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.46
65 TestErrorSpam/setup 35.93
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.22
68 TestErrorSpam/pause 0.67
69 TestErrorSpam/unpause 0.62
70 TestErrorSpam/stop 64.28
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 51.02
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 64.94
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.42
82 TestFunctional/serial/CacheCmd/cache/add_local 1.11
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 38.32
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.67
93 TestFunctional/serial/LogsFileCmd 0.61
94 TestFunctional/serial/InvalidService 3.88
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 12.09
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.12
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.1
106 TestFunctional/parallel/PersistentVolumeClaim 26.57
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.42
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.38
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
120 TestFunctional/parallel/License 0.23
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.15
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.61
128 TestFunctional/parallel/ImageCommands/Setup 1.77
129 TestFunctional/parallel/DockerEnv/bash 0.27
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.67
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.09
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
160 TestFunctional/parallel/MountCmd/any-port 3.94
161 TestFunctional/parallel/MountCmd/specific-port 0.97
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.74
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 200.48
170 TestMultiControlPlane/serial/DeployApp 5.47
171 TestMultiControlPlane/serial/PingHostFromPods 0.76
172 TestMultiControlPlane/serial/AddWorkerNode 169.1
173 TestMultiControlPlane/serial/NodeLabels 0.15
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.66
175 TestMultiControlPlane/serial/CopyFile 4.29
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.1
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 3.26
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 0.88
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.43
286 TestNoKubernetes/serial/Stop 3.59
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.62
305 TestStartStop/group/old-k8s-version/serial/Stop 2.03
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/no-preload/serial/Stop 3.53
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
325 TestStartStop/group/embed-certs/serial/Stop 3.63
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.76
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.11
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 3.95
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-914000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-914000: exit status 85 (92.353375ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-914000 | jenkins | v1.33.1 | 19 Jul 24 11:12 PDT |          |
	|         | -p download-only-914000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:12:54
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:12:54.793152    1568 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:12:54.793312    1568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:12:54.793315    1568 out.go:304] Setting ErrFile to fd 2...
	I0719 11:12:54.793318    1568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:12:54.793453    1568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	W0719 11:12:54.793535    1568 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19307-1066/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19307-1066/.minikube/config/config.json: no such file or directory
	I0719 11:12:54.794759    1568 out.go:298] Setting JSON to true
	I0719 11:12:54.812311    1568 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":737,"bootTime":1721412037,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:12:54.812372    1568 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:12:54.817717    1568 out.go:97] [download-only-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:12:54.817855    1568 notify.go:220] Checking for updates...
	W0719 11:12:54.817911    1568 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 11:12:54.820686    1568 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:12:54.827704    1568 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:12:54.830764    1568 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:12:54.833745    1568 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:12:54.841780    1568 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	W0719 11:12:54.847714    1568 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:12:54.847925    1568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:12:54.851800    1568 out.go:97] Using the qemu2 driver based on user configuration
	I0719 11:12:54.851822    1568 start.go:297] selected driver: qemu2
	I0719 11:12:54.851837    1568 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:12:54.851932    1568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:12:54.854710    1568 out.go:169] Automatically selected the socket_vmnet network
	I0719 11:12:54.860307    1568 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 11:12:54.860380    1568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:12:54.860418    1568 cni.go:84] Creating CNI manager for ""
	I0719 11:12:54.860423    1568 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 11:12:54.860479    1568 start.go:340] cluster config:
	{Name:download-only-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:12:54.865604    1568 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:12:54.869558    1568 out.go:97] Downloading VM boot image ...
	I0719 11:12:54.869573    1568 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0719 11:12:59.388201    1568 out.go:97] Starting "download-only-914000" primary control-plane node in "download-only-914000" cluster
	I0719 11:12:59.388240    1568 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:12:59.442350    1568 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 11:12:59.442374    1568 cache.go:56] Caching tarball of preloaded images
	I0719 11:12:59.442522    1568 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:12:59.447613    1568 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 11:12:59.447620    1568 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:12:59.530447    1568 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 11:13:04.743702    1568 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:04.743880    1568 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:05.439835    1568 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 11:13:05.440029    1568 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/download-only-914000/config.json ...
	I0719 11:13:05.440059    1568 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/download-only-914000/config.json: {Name:mk0c144cee678b853797870f94b425b1e9982c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:13:05.440307    1568 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 11:13:05.440494    1568 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0719 11:13:05.839641    1568 out.go:169] 
	W0719 11:13:05.845869    1568 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60 0x108a0da60] Decompressors:map[bz2:0x14000168830 gz:0x14000168838 tar:0x140001687e0 tar.bz2:0x140001687f0 tar.gz:0x14000168800 tar.xz:0x14000168810 tar.zst:0x14000168820 tbz2:0x140001687f0 tgz:0x14000168800 txz:0x14000168810 tzst:0x14000168820 xz:0x14000168840 zip:0x14000168850 zst:0x14000168848] Getters:map[file:0x1400077e6d0 http:0x140008b6190 https:0x140008b61e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0719 11:13:05.845894    1568 out_reason.go:110] 
	W0719 11:13:05.853737    1568 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 11:13:05.857675    1568 out.go:169] 
	
	
	* The control-plane node download-only-914000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-914000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
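
Note: the non-zero exit captured above records the root cause of the TestDownloadOnly/v1.20.0/json-events and kubectl failures listed at the top of this report: dl.k8s.io answers 404 for the v1.20.0 darwin/arm64 kubectl checksum file, most likely because upstream never published darwin/arm64 binaries for that old release. A minimal standalone Go sketch (not part of the test suite; the URL is copied verbatim from the log) that reproduces the probe:

// probe.go - reproduce the 404 by probing the checksum URL the getter tried.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Exact URL from the log; v1.20.0 predates darwin/arm64 kubectl
	// artifacts, so a 404 here is the expected outcome.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status) // e.g. "... -> 404 Not Found"
}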

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-914000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (7.38s)
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-388000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (7.382415167s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.38s)

TestDownloadOnly/v1.30.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-388000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-388000: exit status 85 (73.603084ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-914000 | jenkins | v1.33.1 | 19 Jul 24 11:12 PDT |                     |
	|         | -p download-only-914000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| delete  | -p download-only-914000        | download-only-914000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| start   | -o=json --download-only        | download-only-388000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-388000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:13:06
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:13:06.264633    1592 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:13:06.264749    1592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:06.264752    1592 out.go:304] Setting ErrFile to fd 2...
	I0719 11:13:06.264754    1592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:06.264890    1592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:13:06.265915    1592 out.go:298] Setting JSON to true
	I0719 11:13:06.282708    1592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":749,"bootTime":1721412037,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:13:06.282765    1592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:13:06.287604    1592 out.go:97] [download-only-388000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:13:06.287680    1592 notify.go:220] Checking for updates...
	I0719 11:13:06.291449    1592 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:13:06.294637    1592 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:13:06.298681    1592 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:13:06.301697    1592 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:13:06.304627    1592 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	W0719 11:13:06.310492    1592 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:13:06.310638    1592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:13:06.313540    1592 out.go:97] Using the qemu2 driver based on user configuration
	I0719 11:13:06.313550    1592 start.go:297] selected driver: qemu2
	I0719 11:13:06.313554    1592 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:13:06.313603    1592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:13:06.316632    1592 out.go:169] Automatically selected the socket_vmnet network
	I0719 11:13:06.321808    1592 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 11:13:06.321887    1592 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:13:06.321909    1592 cni.go:84] Creating CNI manager for ""
	I0719 11:13:06.321918    1592 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:13:06.321925    1592 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:13:06.321963    1592 start.go:340] cluster config:
	{Name:download-only-388000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-388000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:13:06.325320    1592 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:06.328571    1592 out.go:97] Starting "download-only-388000" primary control-plane node in "download-only-388000" cluster
	I0719 11:13:06.328579    1592 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:13:06.386127    1592 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:13:06.386146    1592 cache.go:56] Caching tarball of preloaded images
	I0719 11:13:06.386320    1592 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 11:13:06.391558    1592 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 11:13:06.391566    1592 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:06.476636    1592 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 11:13:11.236659    1592 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:11.236994    1592 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-388000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-388000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)
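
Note: the preload downloads above are fetched with a ?checksum=md5:... query and the log then shows "saving checksum" / "verifying checksum" steps. A standalone Go sketch of that verification step (the file name and md5 literal are copied from the log; this is an illustration, not minikube's actual downloader):

// verify.go - recompute and compare the md5 of a downloaded preload tarball.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	const want = "5a76dba1959f6b6fc5e29e1e172ab9ca" // from ?checksum=md5:... above
	f, err := os.Open("preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println(err)
		return
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum match:", got == want)
}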

TestDownloadOnly/v1.30.3/DeleteAll (0.1s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-388000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (7.11s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-903000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-903000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (7.108352208s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.11s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-903000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-903000: exit status 85 (75.742208ms)
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-914000 | jenkins | v1.33.1 | 19 Jul 24 11:12 PDT |                     |
	|         | -p download-only-914000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| delete  | -p download-only-914000             | download-only-914000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| start   | -o=json --download-only             | download-only-388000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-388000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| delete  | -p download-only-388000             | download-only-388000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT | 19 Jul 24 11:13 PDT |
	| start   | -o=json --download-only             | download-only-903000 | jenkins | v1.33.1 | 19 Jul 24 11:13 PDT |                     |
	|         | -p download-only-903000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 11:13:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 11:13:13.927256    1618 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:13:13.927395    1618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:13.927399    1618 out.go:304] Setting ErrFile to fd 2...
	I0719 11:13:13.927401    1618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:13:13.927528    1618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:13:13.928590    1618 out.go:298] Setting JSON to true
	I0719 11:13:13.944363    1618 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":756,"bootTime":1721412037,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:13:13.944429    1618 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:13:13.949226    1618 out.go:97] [download-only-903000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:13:13.949307    1618 notify.go:220] Checking for updates...
	I0719 11:13:13.953152    1618 out.go:169] MINIKUBE_LOCATION=19307
	I0719 11:13:13.957287    1618 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:13:13.961211    1618 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:13:13.964280    1618 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:13:13.967266    1618 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	W0719 11:13:13.973168    1618 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 11:13:13.973315    1618 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:13:13.976217    1618 out.go:97] Using the qemu2 driver based on user configuration
	I0719 11:13:13.976229    1618 start.go:297] selected driver: qemu2
	I0719 11:13:13.976234    1618 start.go:901] validating driver "qemu2" against <nil>
	I0719 11:13:13.976302    1618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 11:13:13.977644    1618 out.go:169] Automatically selected the socket_vmnet network
	I0719 11:13:13.982277    1618 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 11:13:13.982375    1618 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 11:13:13.982404    1618 cni.go:84] Creating CNI manager for ""
	I0719 11:13:13.982410    1618 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 11:13:13.982415    1618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 11:13:13.982455    1618 start.go:340] cluster config:
	{Name:download-only-903000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:13:13.985896    1618 iso.go:125] acquiring lock: {Name:mka36757a451272d8b2240983716efec40d03311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 11:13:13.993327    1618 out.go:97] Starting "download-only-903000" primary control-plane node in "download-only-903000" cluster
	I0719 11:13:13.993334    1618 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 11:13:14.045058    1618 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 11:13:14.045078    1618 cache.go:56] Caching tarball of preloaded images
	I0719 11:13:14.045286    1618 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 11:13:14.049210    1618 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 11:13:14.049216    1618 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:14.129110    1618 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 11:13:18.395195    1618 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:18.395371    1618 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 11:13:18.914094    1618 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 11:13:18.914313    1618 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/download-only-903000/config.json ...
	I0719 11:13:18.914332    1618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/download-only-903000/config.json: {Name:mk2d25cfbebe86c818e46844a1de5779230230c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 11:13:18.914574    1618 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 11:13:18.914698    1618 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19307-1066/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-903000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-903000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-903000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-791000 --alsologtostderr --binary-mirror http://127.0.0.1:49326 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-791000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-791000
--- PASS: TestBinaryMirror (0.34s)
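
Note: TestBinaryMirror passes --binary-mirror http://127.0.0.1:49326, i.e. the test fetches Kubernetes binaries from a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of the kind of server such a flag can point at (the ./mirror layout is an assumption for illustration, not minikube's test helper):

// mirror.go - a plain HTTP file server whose directory tree mirrors the
// upstream release layout, e.g. ./mirror/v1.30.3/bin/darwin/arm64/kubectl.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror on the port the test would pass via --binary-mirror.
	log.Fatal(http.ListenAndServe("127.0.0.1:49326",
		http.FileServer(http.Dir("./mirror"))))
}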

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-949000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-949000: exit status 85 (56.33375ms)
-- stdout --
	* Profile "addons-949000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-949000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-949000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-949000: exit status 85 (52.59275ms)
-- stdout --
	* Profile "addons-949000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-949000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
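
Note: both PreSetup tests pass precisely because the command fails: the "(dbg) Non-zero exit ... exit status 85" lines are the expected outcome when addressing a profile that does not exist yet. A standalone Go sketch of how such an exit status can be read back from a subprocess (command and expected code are taken from the log; the helper name is invented here):

// exitcode.go - run a command and report its exit status.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndExitCode returns the process exit status (0 on success).
func runAndExitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil // command ran but exited non-zero
	}
	if err != nil {
		return 0, err // command did not run at all
	}
	return 0, nil
}

func main() {
	code, err := runAndExitCode("out/minikube-darwin-arm64",
		"addons", "disable", "dashboard", "-p", "addons-949000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("exit status", code, "- want 85 while the profile is missing")
}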

TestAddons/Setup (207.58s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-949000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-949000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m27.581755209s)
--- PASS: TestAddons/Setup (207.58s)

TestAddons/parallel/Registry (13.24s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.576834ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-q85g5" [721b2e00-a948-4388-895f-e7ee308d6db9] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003824083s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tkxvv" [b3127eed-fe9f-4fdd-bf6b-bbaee0ff3b72] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004462542s
addons_test.go:342: (dbg) Run:  kubectl --context addons-949000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-949000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-949000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.908803125s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 ip
2024/07/19 11:17:02 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.24s)

TestAddons/parallel/Ingress (19.86s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-949000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-949000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-949000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4ba78f8f-075e-4e70-919e-7d494dcb23e0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4ba78f8f-075e-4e70-919e-7d494dcb23e0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.001898375s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-949000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-949000 addons disable ingress-dns --alsologtostderr -v=1: (1.102707625s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-949000 addons disable ingress --alsologtostderr -v=1: (7.193918625s)
--- PASS: TestAddons/parallel/Ingress (19.86s)
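
Note: the curl invocation above ("curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'") is what exercises name-based routing: the ingress controller selects the backend from the Host header, not from the target address. The same request in a standalone Go sketch (illustration only; in net/http the Host header lives on the request struct rather than in the Header map):

// hostheader.go - send an HTTP request with an overridden Host header.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Route via the ingress host rule, regardless of the IP we dial.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}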

TestAddons/parallel/InspektorGadget (10.22s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kq8tf" [b92c74db-7933-470a-811c-aa126b26bd58] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004730333s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-949000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-949000: (5.215158167s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.27s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.518625ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-g4sf8" [7aa14917-03c7-457d-8c45-ece91e118893] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003828792s
addons_test.go:417: (dbg) Run:  kubectl --context addons-949000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.27s)

TestAddons/parallel/CSI (34.98s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.594084ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a5848e21-18d9-4a0a-88ab-101434f72313] Pending
helpers_test.go:344: "task-pv-pod" [a5848e21-18d9-4a0a-88ab-101434f72313] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a5848e21-18d9-4a0a-88ab-101434f72313] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003724875s
addons_test.go:586: (dbg) Run:  kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-949000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-949000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-949000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-949000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2a2d85f5-522f-4428-9c92-f6b23adb7c25] Pending
helpers_test.go:344: "task-pv-pod-restore" [2a2d85f5-522f-4428-9c92-f6b23adb7c25] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2a2d85f5-522f-4428-9c92-f6b23adb7c25] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003828375s
addons_test.go:628: (dbg) Run:  kubectl --context addons-949000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-949000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-949000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-949000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.065873958s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (34.98s)
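For reference, the snapshot/restore flow exercised above can be replayed by hand against the same context. This is a sketch built only from the commands in this log (the PVC, pod, and snapshot specs live in the repo's testdata/csi-hostpath-driver/ directory):

    $ kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pvc.yaml
    $ kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    $ kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    $ kubectl --context addons-949000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    # once readyToUse is true, delete the source objects and restore from the snapshot
    $ kubectl --context addons-949000 delete pod task-pv-pod
    $ kubectl --context addons-949000 delete pvc hpvc
    $ kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    $ kubectl --context addons-949000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml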

TestAddons/parallel/Headlamp (13.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-949000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-m5zdj" [3f3841c5-842d-477a-8364-c2a3b3887fbe] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-m5zdj" [3f3841c5-842d-477a-8364-c2a3b3887fbe] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003630709s
--- PASS: TestAddons/parallel/Headlamp (13.45s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-zqng8" [097070aa-1399-4431-9f93-b8dea3858f23] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003895125s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-949000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (40.94s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-949000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-949000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [28c741dd-822d-44c9-9411-ac29b2d9b9f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [28c741dd-822d-44c9-9411-ac29b2d9b9f0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [28c741dd-822d-44c9-9411-ac29b2d9b9f0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005166458s
addons_test.go:992: (dbg) Run:  kubectl --context addons-949000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 ssh "cat /opt/local-path-provisioner/pvc-3f1e06bc-1d7c-490e-83d8-d3e67d3172ab_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-949000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-949000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-949000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.427907167s)
--- PASS: TestAddons/parallel/LocalPath (40.94s)
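The verification step is worth noting: the local-path provisioner backs each PVC with a plain host directory under /opt/local-path-provisioner, so the test can confirm the pod's write with nothing more than an ssh cat, along the lines of (the pvc-<uid> segment is whatever volume name the provisioner assigned, as seen in the logged command above):

    $ out/minikube-darwin-arm64 -p addons-949000 ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"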

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-d6djb" [8f3f0ad4-9d92-4c93-a8be-791b0110b8fd] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003950792s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-949000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-6zf78" [0e8ab16e-81e7-4491-bb74-4e42dcb2123c] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003575s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (38.81s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 1.575333ms
addons_test.go:889: volcano-scheduler stabilized in 1.751ms
addons_test.go:897: volcano-admission stabilized in 1.92175ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-c69h4" [32cd35c0-2fbd-4110-ae34-75fc0daf15c7] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003688666s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-98g7p" [a9699dad-43ca-4264-a496-22f93e3aa066] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003705959s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-b8zwg" [7a668bbc-d0dd-41ab-95be-0dbff4e10d7e] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003708084s
addons_test.go:924: (dbg) Run:  kubectl --context addons-949000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-949000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-949000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [34f014ad-8fe7-4c9e-8b26-59996766c258] Pending
helpers_test.go:344: "test-job-nginx-0" [34f014ad-8fe7-4c9e-8b26-59996766c258] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [34f014ad-8fe7-4c9e-8b26-59996766c258] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 14.003393958s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-949000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-949000 addons disable volcano --alsologtostderr -v=1: (9.620681083s)
--- PASS: TestAddons/parallel/Volcano (38.81s)
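The workload check submits a Volcano job from testdata and waits for its pod, rather than inspecting the controllers directly. Stripped to the commands from this log:

    $ kubectl --context addons-949000 create -f testdata/vcjob.yaml
    $ kubectl --context addons-949000 get vcjob -n my-volcano
    # then wait for pods labeled volcano.sh/job-name=test-job in the my-volcano namespace to reach Running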

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-949000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-949000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-949000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-949000: (12.198351167s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-949000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-949000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-949000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.46s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.46s)

TestErrorSpam/setup (35.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-381000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-381000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 --driver=qemu2 : (35.932544875s)
--- PASS: TestErrorSpam/setup (35.93s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 status
--- PASS: TestErrorSpam/status (0.22s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (64.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 stop: (12.194016167s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 stop: (26.058169417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-381000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-381000 stop: (26.025455375s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19307-1066/.minikube/files/etc/test/nested/copy/1565/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-189000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-189000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.018463s)
--- PASS: TestFunctional/serial/StartWithProxy (51.02s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (64.94s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-189000 --alsologtostderr -v=8
E0719 11:21:49.560934    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:49.569421    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:49.579887    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:49.600984    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:49.643216    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:49.725493    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:49.887601    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:50.208277    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:50.848727    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:52.130901    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:54.693056    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:21:59.815159    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:22:10.057254    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:22:30.539232    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-189000 --alsologtostderr -v=8: (1m4.940684292s)
functional_test.go:659: soft start took 1m4.941046167s for "functional-189000" cluster.
--- PASS: TestFunctional/serial/SoftStart (64.94s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-189000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3762732341/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cache add minikube-local-cache-test:functional-189000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cache delete minikube-local-cache-test:functional-189000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-189000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (71.060334ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)
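The sequence above is the whole contract of "cache reload": remove a cached image inside the node, observe that crictl no longer finds it, reload, and observe that it is back. Replayed by hand, using only the commands shown in this log:

    $ out/minikube-darwin-arm64 -p functional-189000 ssh sudo docker rmi registry.k8s.io/pause:latest
    $ out/minikube-darwin-arm64 -p functional-189000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # exits non-zero: no such image "registry.k8s.io/pause:latest" present
    $ out/minikube-darwin-arm64 -p functional-189000 cache reload
    $ out/minikube-darwin-arm64 -p functional-189000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # exits 0 once the cached image has been pushed back into the node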

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 kubectl -- --context functional-189000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-189000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (38.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-189000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0719 11:23:11.501184    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-189000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.319524833s)
functional_test.go:757: restart took 38.319639125s for "functional-189000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.32s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-189000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1141098830/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (3.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-189000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-189000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-189000: exit status 115 (97.259875ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31464 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-189000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)
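Exit status 115 is the expected outcome here, not a failure: the service object exists and is assigned a NodePort URL, but no running pod backs it, so "minikube service" aborts with SVC_UNREACHABLE instead of opening the URL. The reproduction, using only commands from this log:

    $ kubectl --context functional-189000 apply -f testdata/invalidsvc.yaml
    $ out/minikube-darwin-arm64 service invalid-svc -p functional-189000
    # exits 115: "X Exiting due to SVC_UNREACHABLE: ... no running pod for service invalid-svc found"
    $ kubectl --context functional-189000 delete -f testdata/invalidsvc.yaml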

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 config get cpus: exit status 14 (33.647959ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 config get cpus: exit status 14 (28.831125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
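The two "exit status 14" results above are the assertions, not failures: minikube reports a missing config key through its exit code. The round trip the test performs:

    $ out/minikube-darwin-arm64 -p functional-189000 config get cpus     # exit 14: key not set
    $ out/minikube-darwin-arm64 -p functional-189000 config set cpus 2
    $ out/minikube-darwin-arm64 -p functional-189000 config get cpus     # prints 2, exit 0
    $ out/minikube-darwin-arm64 -p functional-189000 config unset cpus
    $ out/minikube-darwin-arm64 -p functional-189000 config get cpus     # exit 14 again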

TestFunctional/parallel/DashboardCmd (12.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-189000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-189000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2354: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.09s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-189000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-189000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.250083ms)

-- stdout --
	* [functional-189000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0719 11:24:12.596459    2330 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:24:12.596614    2330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:24:12.596617    2330 out.go:304] Setting ErrFile to fd 2...
	I0719 11:24:12.596620    2330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:24:12.596755    2330 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:24:12.597825    2330 out.go:298] Setting JSON to false
	I0719 11:24:12.614642    2330 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1415,"bootTime":1721412037,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:24:12.614716    2330 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:24:12.620239    2330 out.go:177] * [functional-189000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 11:24:12.627029    2330 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:24:12.627068    2330 notify.go:220] Checking for updates...
	I0719 11:24:12.635053    2330 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:24:12.638060    2330 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:24:12.641043    2330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:24:12.644004    2330 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:24:12.647075    2330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:24:12.650346    2330 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:24:12.650604    2330 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:24:12.654981    2330 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 11:24:12.660973    2330 start.go:297] selected driver: qemu2
	I0719 11:24:12.660983    2330 start.go:901] validating driver "qemu2" against &{Name:functional-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:24:12.661029    2330 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:24:12.666999    2330 out.go:177] 
	W0719 11:24:12.671067    2330 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 11:24:12.674980    2330 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-189000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
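Both invocations stop before touching the VM. The first fails fast (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) because 250MiB is below minikube's 1800MB usable minimum, while the second dry run, with no memory override, validates cleanly against the existing profile:

    $ out/minikube-darwin-arm64 start -p functional-189000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2
    # exit 23: requested allocation below the usable minimum
    $ out/minikube-darwin-arm64 start -p functional-189000 --dry-run --alsologtostderr -v=1 --driver=qemu2
    # exit 0: configuration validates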

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-189000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-189000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (119.892375ms)

-- stdout --
	* [functional-189000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0719 11:24:12.815709    2343 out.go:291] Setting OutFile to fd 1 ...
	I0719 11:24:12.815814    2343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:24:12.815817    2343 out.go:304] Setting ErrFile to fd 2...
	I0719 11:24:12.815820    2343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 11:24:12.815946    2343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
	I0719 11:24:12.817473    2343 out.go:298] Setting JSON to false
	I0719 11:24:12.836149    2343 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1415,"bootTime":1721412037,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0719 11:24:12.836245    2343 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 11:24:12.841003    2343 out.go:177] * [functional-189000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0719 11:24:12.849070    2343 out.go:177]   - MINIKUBE_LOCATION=19307
	I0719 11:24:12.849117    2343 notify.go:220] Checking for updates...
	I0719 11:24:12.856007    2343 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	I0719 11:24:12.863006    2343 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 11:24:12.866055    2343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 11:24:12.869015    2343 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	I0719 11:24:12.872026    2343 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 11:24:12.875788    2343 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 11:24:12.876050    2343 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 11:24:12.880074    2343 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0719 11:24:12.887004    2343 start.go:297] selected driver: qemu2
	I0719 11:24:12.887012    2343 start.go:901] validating driver "qemu2" against &{Name:functional-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 11:24:12.887057    2343 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 11:24:12.892867    2343 out.go:177] 
	W0719 11:24:12.897054    2343 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 11:24:12.901041    2343 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (26.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9eec432d-4c97-4aa7-8ae4-94533daf7927] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00385425s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-189000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-189000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-189000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-189000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [370939c5-e947-43af-8926-af523a740154] Pending
helpers_test.go:344: "sp-pod" [370939c5-e947-43af-8926-af523a740154] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [370939c5-e947-43af-8926-af523a740154] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00508425s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-189000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-189000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-189000 delete -f testdata/storage-provisioner/pod.yaml: (1.168520666s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-189000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [87c5472f-35e7-48ee-bfc4-c408bb120a4c] Pending
helpers_test.go:344: "sp-pod" [87c5472f-35e7-48ee-bfc4-c408bb120a4c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [87c5472f-35e7-48ee-bfc4-c408bb120a4c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003771s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-189000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.57s)
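Note: the sequence above is the persistence check: write /tmp/mount/foo through the claim, delete the pod, recreate it from the same manifest, and confirm the file is still on the volume. The testdata manifests are not reproduced in the log; an illustrative equivalent of the claim (size and storage class are assumptions) would be:

# Minimal PVC against the default storage class; the pod manifest mounts it at /tmp/mount.
kubectl --context functional-189000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF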

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh -n functional-189000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cp functional-189000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3535347809/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh -n functional-189000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh -n functional-189000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1565/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /etc/test/nested/copy/1565/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /etc/ssl/certs/1565.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /usr/share/ca-certificates/1565.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /etc/ssl/certs/15652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /usr/share/ca-certificates/15652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
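Note: the *.0 filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, the scheme CA directories use to look up certificates; the hash naming a synced PEM can be recomputed with:

# Print the subject hash that names the cert's .0 link in the CA directory.
openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/1565.pem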

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-189000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh "sudo systemctl is-active crio": exit status 1 (82.653208ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
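Note: systemctl is-active prints the unit state and exits 0 only when the unit is active (inactive units typically exit 3), so "inactive" on stdout plus a non-zero exit is exactly what the test expects for crio while the docker runtime is selected:

# Exit code is 0 only for an active unit; here crio should report inactive with exit 3.
out/minikube-darwin-arm64 -p functional-189000 ssh 'sudo systemctl is-active crio; echo exit=$?'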

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-189000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-189000
docker.io/kicbase/echo-server:functional-189000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-189000 image ls --format short --alsologtostderr:
I0719 11:24:14.332221    2369 out.go:291] Setting OutFile to fd 1 ...
I0719 11:24:14.332362    2369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.332371    2369 out.go:304] Setting ErrFile to fd 2...
I0719 11:24:14.332374    2369 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.332510    2369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:24:14.332949    2369 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.333014    2369 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.333864    2369 ssh_runner.go:195] Run: systemctl --version
I0719 11:24:14.333873    2369 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/functional-189000/id_rsa Username:docker}
I0719 11:24:14.356524    2369 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-189000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-189000 | 14605d8b3fde1 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kicbase/echo-server               | functional-189000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | alpine            | 5461b18aaccf3 | 44.8MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | latest            | 443d199e8bfcc | 193MB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-189000 image ls --format table --alsologtostderr:
I0719 11:24:14.532804    2375 out.go:291] Setting OutFile to fd 1 ...
I0719 11:24:14.532962    2375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.532968    2375 out.go:304] Setting ErrFile to fd 2...
I0719 11:24:14.532971    2375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.533105    2375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:24:14.533605    2375 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.533665    2375 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.534489    2375 ssh_runner.go:195] Run: systemctl --version
I0719 11:24:14.534497    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/functional-189000/id_rsa Username:docker}
I0719 11:24:14.557111    2375 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-189000 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-189000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbf
d1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"14605d8b3fde14df8fd9cd651ab1dbb5203049d236dd254e7c1e955db5b4737d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-189000"],"size":"30"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDige
sts":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-189000 image ls --format json --alsologtostderr:
I0719 11:24:14.467808    2373 out.go:291] Setting OutFile to fd 1 ...
I0719 11:24:14.467970    2373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.467974    2373 out.go:304] Setting ErrFile to fd 2...
I0719 11:24:14.467976    2373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.468097    2373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:24:14.468505    2373 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.468569    2373 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.469419    2373 ssh_runner.go:195] Run: systemctl --version
I0719 11:24:14.469428    2373 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/functional-189000/id_rsa Username:docker}
I0719 11:24:14.491905    2373 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-189000 image ls --format yaml --alsologtostderr:
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 14605d8b3fde14df8fd9cd651ab1dbb5203049d236dd254e7c1e955db5b4737d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-189000
size: "30"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-189000
size: "4780000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-189000 image ls --format yaml --alsologtostderr:
I0719 11:24:14.399785    2371 out.go:291] Setting OutFile to fd 1 ...
I0719 11:24:14.399952    2371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.399955    2371 out.go:304] Setting ErrFile to fd 2...
I0719 11:24:14.399957    2371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.400105    2371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:24:14.400509    2371 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.400565    2371 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.401444    2371 ssh_runner.go:195] Run: systemctl --version
I0719 11:24:14.401453    2371 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/functional-189000/id_rsa Username:docker}
I0719 11:24:14.424752    2371 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh pgrep buildkitd: exit status 1 (56.297125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image build -t localhost/my-image:functional-189000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-189000 image build -t localhost/my-image:functional-189000 testdata/build --alsologtostderr: (1.478877292s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-189000 image build -t localhost/my-image:functional-189000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 3caaf08deae7
---> Removed intermediate container 3caaf08deae7
---> 8e8c41866ff2
Step 3/3 : ADD content.txt /
---> e2bb5a62cc58
Successfully built e2bb5a62cc58
Successfully tagged localhost/my-image:functional-189000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-189000 image build -t localhost/my-image:functional-189000 testdata/build --alsologtostderr:
I0719 11:24:14.656504    2379 out.go:291] Setting OutFile to fd 1 ...
I0719 11:24:14.656734    2379 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.656740    2379 out.go:304] Setting ErrFile to fd 2...
I0719 11:24:14.656742    2379 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 11:24:14.656874    2379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19307-1066/.minikube/bin
I0719 11:24:14.657327    2379 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.658066    2379 config.go:182] Loaded profile config "functional-189000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 11:24:14.658923    2379 ssh_runner.go:195] Run: systemctl --version
I0719 11:24:14.658932    2379 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19307-1066/.minikube/machines/functional-189000/id_rsa Username:docker}
I0719 11:24:14.682786    2379 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.341330869.tar
I0719 11:24:14.682834    2379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 11:24:14.686298    2379 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.341330869.tar
I0719 11:24:14.687866    2379 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.341330869.tar: stat -c "%s %y" /var/lib/minikube/build/build.341330869.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.341330869.tar': No such file or directory
I0719 11:24:14.687881    2379 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.341330869.tar --> /var/lib/minikube/build/build.341330869.tar (3072 bytes)
I0719 11:24:14.696583    2379 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.341330869
I0719 11:24:14.700017    2379 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.341330869 -xf /var/lib/minikube/build/build.341330869.tar
I0719 11:24:14.703823    2379 docker.go:360] Building image: /var/lib/minikube/build/build.341330869
I0719 11:24:14.703868    2379 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-189000 /var/lib/minikube/build/build.341330869
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0719 11:24:16.050001    2379 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-189000 /var/lib/minikube/build/build.341330869: (1.34613225s)
I0719 11:24:16.050062    2379 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.341330869
I0719 11:24:16.054685    2379 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.341330869.tar
I0719 11:24:16.058040    2379 build_images.go:217] Built localhost/my-image:functional-189000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.341330869.tar
I0719 11:24:16.058063    2379 build_images.go:133] succeeded building to: functional-189000
I0719 11:24:16.058079    2379 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls
2024/07/19 11:24:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)
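Note: the Step 1/3 through 3/3 output above implies testdata/build contains a Dockerfile of this shape (a reconstruction from the log; the directory itself is not shown):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

It is built against the cluster's docker daemon with the image build command quoted above, then verified via image ls.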

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.751903166s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-189000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-189000 docker-env) && out/minikube-darwin-arm64 status -p functional-189000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-189000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
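Note: all three update-context cases invoke the same command; it rewrites the profile's kubeconfig entry if the cluster's IP or port changed. The recorded API server endpoint can be read back with (illustrative):

# Show the server URL stored for the profile's kubeconfig cluster entry.
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-189000")].cluster.server}'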

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-189000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-189000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-5rxbg" [6a747b20-9e56-4f73-ad82-1cff7ab89275] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-5rxbg" [6a747b20-9e56-4f73-ad82-1cff7ab89275] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003864125s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
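Note: DeployApp leaves a NodePort service behind that the ServiceCmd subtests below resolve; the assigned port (30714 further down) can be read back with (illustrative):

# Print the NodePort allocated to the hello-node service.
kubectl --context functional-189000 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'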

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image load --daemon docker.io/kicbase/echo-server:functional-189000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image load --daemon docker.io/kicbase/echo-server:functional-189000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-189000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image load --daemon docker.io/kicbase/echo-server:functional-189000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image save docker.io/kicbase/echo-server:functional-189000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image rm docker.io/kicbase/echo-server:functional-189000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-189000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 image save --daemon docker.io/kicbase/echo-server:functional-189000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-189000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
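Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon round-trip the echo-server image between the cluster runtime, a tarball on the host, and the host's docker daemon; condensed (tar path illustrative):

# cluster -> tarball, tarball -> cluster, cluster -> host docker daemon
out/minikube-darwin-arm64 -p functional-189000 image save docker.io/kicbase/echo-server:functional-189000 /tmp/echo-server-save.tar
out/minikube-darwin-arm64 -p functional-189000 image load /tmp/echo-server-save.tar
out/minikube-darwin-arm64 -p functional-189000 image save --daemon docker.io/kicbase/echo-server:functional-189000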

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-189000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-189000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-189000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2185: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-189000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-189000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-189000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0e229bae-0209-47fb-854c-356a3f91f59c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0e229bae-0209-47fb-854c-356a3f91f59c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004343875s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.10s)
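Note: testdata/testsvc.yaml is not reproduced in the log; the later tunnel checks imply a pod labeled run=nginx-svc behind a LoadBalancer Service named nginx-svc, roughly (an illustrative sketch, not the actual manifest):

kubectl --context functional-189000 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer
  selector:
    run: nginx-svc
  ports:
    - port: 80
EOF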

TestFunctional/parallel/ServiceCmd/List (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 service list -o json
functional_test.go:1490: Took "79.007333ms" to run "out/minikube-darwin-arm64 -p functional-189000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30714
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30714
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-189000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.111.130 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
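Note: with the tunnel from StartTunnel still running, the ingress IP reported by WaitService/IngressIP is routable from the macOS host, which is what AccessDirect verifies; a manual spot check:

# Fetch the nginx welcome page through the tunneled LoadBalancer IP.
curl -s http://10.108.111.130 | head -n 4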

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-189000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "81.882375ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.640917ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "80.730042ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.483875ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
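The Took "…" lines above are plain wall-clock timings wrapped around each CLI invocation. A rough Go equivalent, with the binary path and arguments taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"profile", "list", "-o", "json"}
	start := time.Now()
	if err := exec.Command("out/minikube-darwin-arm64", args...).Run(); err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Mirrors the test's log line, e.g. Took "80.730042ms" to run ...
	fmt.Printf("Took %q to run %q\n", time.Since(start).String(),
		"out/minikube-darwin-arm64 profile list -o json")
}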
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1265029812/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721413445917824000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1265029812/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721413445917824000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1265029812/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721413445917824000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1265029812/001/test-1721413445917824000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.253292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 18:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 18:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 18:24 test-1721413445917824000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh cat /mount-9p/test-1721413445917824000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-189000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0a5856e5-98f6-481e-aa1c-c875e571993b] Pending
helpers_test.go:344: "busybox-mount" [0a5856e5-98f6-481e-aa1c-c875e571993b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0a5856e5-98f6-481e-aa1c-c875e571993b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0a5856e5-98f6-481e-aa1c-c875e571993b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003987875s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-189000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1265029812/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (3.94s)
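Note how the first findmnt probe above exits non-zero and is simply re-run: the 9p mount can take a moment to appear inside the guest, so the check polls. A sketch of that poll loop, with the profile name from the log and the timeout as an illustrative value:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs "findmnt" inside the guest until the 9p mount shows up.
func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
		err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "ssh", probe).Run()
		if err == nil {
			return nil // mount is visible inside the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not ready after %v: %w", mountPoint, timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForMount("functional-189000", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}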
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3349167846/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.547417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3349167846/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh "sudo umount -f /mount-9p": exit status 1 (57.292209ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-189000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3349167846/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount1: exit status 1 (64.433958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount3: exit status 1 (55.570083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-189000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-189000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-189000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4275643947/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.74s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-189000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-189000
--- PASS: TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-189000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-604000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0719 11:24:33.422784    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:26:49.557947    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
E0719 11:27:17.263689    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/addons-949000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-604000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m20.28796925s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.48s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-604000 -- rollout status deployment/busybox: (4.014298458s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-4lcpd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-t6l9h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-tx7gv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-4lcpd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-t6l9h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-tx7gv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-4lcpd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-t6l9h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-tx7gv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.47s)
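The deploy check above lists pod names with a jsonpath template and then execs nslookup in each one. The jsonpath output is space-separated, so the iteration reduces to a Fields split, as in this sketch (context name from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-604000",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, pod := range strings.Fields(string(out)) {
		// e.g. kubectl --context ha-604000 exec <pod> -- nslookup kubernetes.io
		fmt.Println("pod:", pod)
	}
}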
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-4lcpd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-4lcpd -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-t6l9h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-t6l9h -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-tx7gv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-604000 -- exec busybox-fc5497c4f-tx7gv -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.76s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-604000 -v=7 --alsologtostderr
E0719 11:28:27.310069    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.316106    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.328236    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.350289    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.391812    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.473997    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.636205    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:27.956432    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:28.597174    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:29.879388    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:32.441561    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:37.563775    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:28:47.805365    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:29:08.287354    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
E0719 11:29:49.249199    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19307-1066/.minikube/profiles/functional-189000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-604000 -v=7 --alsologtostderr: (2m48.873669s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (169.10s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-604000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.15s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.663289208s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.66s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp testdata/cp-test.txt ha-604000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2728938450/001/cp-test_ha-604000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000:/home/docker/cp-test.txt ha-604000-m02:/home/docker/cp-test_ha-604000_ha-604000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test_ha-604000_ha-604000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000:/home/docker/cp-test.txt ha-604000-m03:/home/docker/cp-test_ha-604000_ha-604000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test_ha-604000_ha-604000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000:/home/docker/cp-test.txt ha-604000-m04:/home/docker/cp-test_ha-604000_ha-604000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test_ha-604000_ha-604000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp testdata/cp-test.txt ha-604000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2728938450/001/cp-test_ha-604000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m02:/home/docker/cp-test.txt ha-604000:/home/docker/cp-test_ha-604000-m02_ha-604000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test_ha-604000-m02_ha-604000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m02:/home/docker/cp-test.txt ha-604000-m03:/home/docker/cp-test_ha-604000-m02_ha-604000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test_ha-604000-m02_ha-604000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m02:/home/docker/cp-test.txt ha-604000-m04:/home/docker/cp-test_ha-604000-m02_ha-604000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test_ha-604000-m02_ha-604000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp testdata/cp-test.txt ha-604000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2728938450/001/cp-test_ha-604000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m03:/home/docker/cp-test.txt ha-604000:/home/docker/cp-test_ha-604000-m03_ha-604000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test_ha-604000-m03_ha-604000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m03:/home/docker/cp-test.txt ha-604000-m02:/home/docker/cp-test_ha-604000-m03_ha-604000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test_ha-604000-m03_ha-604000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m03:/home/docker/cp-test.txt ha-604000-m04:/home/docker/cp-test_ha-604000-m03_ha-604000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test_ha-604000-m03_ha-604000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp testdata/cp-test.txt ha-604000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2728938450/001/cp-test_ha-604000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m04:/home/docker/cp-test.txt ha-604000:/home/docker/cp-test_ha-604000-m04_ha-604000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000 "sudo cat /home/docker/cp-test_ha-604000-m04_ha-604000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m04:/home/docker/cp-test.txt ha-604000-m02:/home/docker/cp-test_ha-604000-m04_ha-604000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m02 "sudo cat /home/docker/cp-test_ha-604000-m04_ha-604000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 cp ha-604000-m04:/home/docker/cp-test.txt ha-604000-m03:/home/docker/cp-test_ha-604000-m04_ha-604000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-604000 ssh -n ha-604000-m03 "sudo cat /home/docker/cp-test_ha-604000-m04_ha-604000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.29s)
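The CopyFile sequence above is an all-pairs matrix: minikube cp pushes the file from each node to every other node, and an ssh sudo cat read-back confirms each copy. A condensed sketch of that loop, with node names from the log and error handling abbreviated:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	nodes := []string{"ha-604000", "ha-604000-m02", "ha-604000-m03", "ha-604000-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			// Copy across nodes, then read the file back over ssh to verify.
			cp := exec.Command("out/minikube-darwin-arm64", "-p", "ha-604000", "cp",
				src+":/home/docker/cp-test.txt", dst+":"+dstPath)
			if err := cp.Run(); err != nil {
				fmt.Printf("cp %s -> %s failed: %v\n", src, dst, err)
				continue
			}
			cat := exec.Command("out/minikube-darwin-arm64", "-p", "ha-604000",
				"ssh", "-n", dst, "sudo cat "+dstPath)
			if out, err := cat.Output(); err == nil {
				fmt.Printf("verified %s -> %s (%d bytes)\n", src, dst, len(out))
			}
		}
	}
}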
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.096525375s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.10s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-773000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-773000 --output=json --user=testUser: (3.261311083s)
--- PASS: TestJSONOutput/stop/Command (3.26s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-659000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-659000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.981334ms)

-- stdout --
	{"specversion":"1.0","id":"d200bc40-b76b-4da5-9857-139bbbe2f9c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-659000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"332286b3-5c36-45f4-9e0a-e2c214cf155a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19307"}}
	{"specversion":"1.0","id":"0649c8da-4933-407a-b55f-0145ad9a0400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig"}}
	{"specversion":"1.0","id":"1f112854-f463-4726-82c3-98e90e32744b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b61925a7-0ffd-4710-a54b-9166a4976220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b0ad792b-23ec-422a-80c1-07104c4fd413","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube"}}
	{"specversion":"1.0","id":"aaed9214-30ee-4e46-8fd3-d32e86fd0e0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ed16ad2c-2d93-414f-9b20-29a59706bf3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-659000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-659000
--- PASS: TestErrorJSONOutput (0.20s)
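Each line in the stdout block above is a CloudEvents-style JSON object, and the test asserts on the type and data fields. A minimal decoder for lines like these; the struct is an assumption covering only the keys visible in this log, not minikube's full event schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the fields used here: type plus the string-valued data map.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s (%s): %s\n",
				ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}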
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.88s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-733000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (111.651833ms)

-- stdout --
	* [NoKubernetes-733000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19307
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19307-1066/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19307-1066/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.092875ms)

-- stdout --
	* The control-plane node NoKubernetes-733000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-733000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.648226542s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.777186625s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.43s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-733000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-733000: (3.585346292s)
--- PASS: TestNoKubernetes/serial/Stop (3.59s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (36.611958ms)

-- stdout --
	* The control-plane node NoKubernetes-733000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-733000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-275000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-120000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-120000 --alsologtostderr -v=3: (2.027094584s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-120000 -n old-k8s-version-120000: exit status 7 (52.658125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-120000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
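The "status error: exit status 7 (may be ok)" lines throughout these EnableAddonAfterStop checks reflect that minikube status exits 7 when the host is stopped, which is exactly the state being verified here, so the tests tolerate that code. A sketch of that tolerance; the profile name is from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus returns the {{.Host}} state, treating exit code 7 (stopped) as non-fatal.
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		err = nil // 7 just means the host is stopped; stdout still reads "Stopped"
	}
	return string(out), err
}

func main() {
	state, err := hostStatus("old-k8s-version-120000")
	fmt.Printf("host=%q err=%v\n", state, err)
}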
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-371000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-371000 --alsologtostderr -v=3: (3.529219542s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.53s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-371000 -n no-preload-371000: exit status 7 (60.026333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-371000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-262000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-262000 --alsologtostderr -v=3: (3.629945917s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.63s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-262000 -n embed-certs-262000: exit status 7 (64.008541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-262000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-747000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-747000 --alsologtostderr -v=3: (1.762643667s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.76s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-747000 -n default-k8s-diff-port-747000: exit status 7 (46.566167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-747000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-090000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.95s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-090000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-090000 --alsologtostderr -v=3: (3.945856917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.95s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-090000 -n newest-cni-090000: exit status 7 (61.850709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-090000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.27s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-601000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-601000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-601000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/hosts:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/resolv.conf:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-601000

>>> host: crictl pods:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: crictl containers:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> k8s: describe netcat deployment:
error: context "cilium-601000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-601000" does not exist

>>> k8s: netcat logs:
error: context "cilium-601000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-601000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-601000" does not exist

>>> k8s: coredns logs:
error: context "cilium-601000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-601000" does not exist

>>> k8s: api server logs:
error: context "cilium-601000" does not exist

>>> host: /etc/cni:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: ip a s:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: ip r s:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: iptables-save:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: iptables table nat:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-601000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-601000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-601000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-601000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-601000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-601000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-601000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-601000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-601000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-601000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-601000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: kubelet daemon config:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> k8s: kubelet logs:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-601000

>>> host: docker daemon status:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: docker daemon config:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: docker system info:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: cri-docker daemon status:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: cri-docker daemon config:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: cri-dockerd version:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: containerd daemon status:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: containerd daemon config:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: containerd config dump:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: crio daemon status:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: crio daemon config:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: /etc/crio:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

>>> host: crio config:
* Profile "cilium-601000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601000"

----------------------- debugLogs end: cilium-601000 [took: 2.167096167s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-601000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-601000
--- SKIP: TestNetworkPlugins/group/cilium (2.27s)

TestStartStop/group/disable-driver-mounts (0.1s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-677000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-677000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)